00:00:00.000 Started by upstream project "autotest-per-patch" build number 132351
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.987 The recommended git tool is: git
00:00:00.987 using credential 00000000-0000-0000-0000-000000000002
00:00:00.989 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.001 Fetching changes from the remote Git repository
00:00:01.003 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.015 Using shallow fetch with depth 1
00:00:01.015 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.015 > git --version # timeout=10
00:00:01.028 > git --version # 'git version 2.39.2'
00:00:01.028 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.042 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.042 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.679 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.691 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.705 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.705 > git config core.sparsecheckout # timeout=10
00:00:06.716 > git read-tree -mu HEAD # timeout=10
00:00:06.734 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.757 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.757 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.855 [Pipeline] Start of Pipeline
00:00:06.868 [Pipeline] library
00:00:06.870 Loading library shm_lib@master
00:00:06.870 Library shm_lib@master is cached. Copying from home.
00:00:06.885 [Pipeline] node
00:00:21.887 Still waiting to schedule task
00:00:21.887 Waiting for next available executor on ‘vagrant-vm-host’
00:14:03.393 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:14:03.395 [Pipeline] {
00:14:03.404 [Pipeline] catchError
00:14:03.406 [Pipeline] {
00:14:03.419 [Pipeline] wrap
00:14:03.430 [Pipeline] {
00:14:03.438 [Pipeline] stage
00:14:03.439 [Pipeline] { (Prologue)
00:14:03.459 [Pipeline] echo
00:14:03.461 Node: VM-host-SM38
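Stripped of the Jenkins plumbing, the checkout sequence at the top of this log is a shallow, single-revision fetch; a minimal manual reproduction, assuming only the commands and the revision visible above, is:

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507

(The GIT_ASKPASS credentials and the proxy-dmz.intel.com:911 HTTP proxy used by the runner are environment-specific and omitted here.)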
00:14:03.467 [Pipeline] cleanWs
00:14:03.476 [WS-CLEANUP] Deleting project workspace...
00:14:03.476 [WS-CLEANUP] Deferred wipeout is used...
00:14:03.482 [WS-CLEANUP] done
00:14:03.660 [Pipeline] setCustomBuildProperty
00:14:03.777 [Pipeline] httpRequest
00:14:04.088 [Pipeline] echo
00:14:04.090 Sorcerer 10.211.164.20 is alive
00:14:04.099 [Pipeline] retry
00:14:04.100 [Pipeline] {
00:14:04.110 [Pipeline] httpRequest
00:14:04.113 HttpMethod: GET
00:14:04.114 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:04.114 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:04.115 Response Code: HTTP/1.1 200 OK
00:14:04.115 Success: Status code 200 is in the accepted range: 200,404
00:14:04.116 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:04.261 [Pipeline] }
00:14:04.280 [Pipeline] // retry
00:14:04.287 [Pipeline] sh
00:14:04.564 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:04.578 [Pipeline] httpRequest
00:14:05.126 [Pipeline] echo
00:14:05.128 Sorcerer 10.211.164.20 is alive
00:14:05.140 [Pipeline] retry
00:14:05.142 [Pipeline] {
00:14:05.163 [Pipeline] httpRequest
00:14:05.173 HttpMethod: GET
00:14:05.179 URL: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:05.185 Sending request to url: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:05.189 Response Code: HTTP/1.1 200 OK
00:14:05.192 Success: Status code 200 is in the accepted range: 200,404
00:14:05.197 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:10.881 [Pipeline] }
00:14:10.900 [Pipeline] // retry
00:14:10.909 [Pipeline] sh
00:14:11.183 + tar --no-same-owner -xf spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:14.472 [Pipeline] sh
00:14:14.755 + git -C spdk log --oneline -n5
00:14:14.755 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:14:14.755 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:14:14.755 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:14:14.755 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:14:14.755 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:14:14.801 [Pipeline] writeFile
00:14:14.819 [Pipeline] sh
00:14:15.097 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:15.108 [Pipeline] sh
00:14:15.389 + cat autorun-spdk.conf
00:14:15.389 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:15.389 SPDK_TEST_NVMF=1
00:14:15.389 SPDK_TEST_NVMF_TRANSPORT=tcp
00:14:15.389 SPDK_TEST_URING=1
00:14:15.389 SPDK_TEST_USDT=1
00:14:15.389 SPDK_RUN_UBSAN=1
00:14:15.389 NET_TYPE=virt
00:14:15.389 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:15.396 RUN_NIGHTLY=0
00:14:15.398 [Pipeline] }
00:14:15.413 [Pipeline] // stage
00:14:15.433 [Pipeline] stage
00:14:15.435 [Pipeline] { (Run VM)
00:14:15.449 [Pipeline] sh
00:14:15.727 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:15.727 + echo 'Start stage prepare_nvme.sh'
00:14:15.727 Start stage prepare_nvme.sh
00:14:15.727 + [[ -n 8 ]]
00:14:15.727 + disk_prefix=ex8
00:14:15.727 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]]
00:14:15.727 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]]
00:14:15.727 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf
00:14:15.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:15.727 ++ SPDK_TEST_NVMF=1
00:14:15.727 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:14:15.727 ++ SPDK_TEST_URING=1
00:14:15.727 ++ SPDK_TEST_USDT=1
00:14:15.727 ++ SPDK_RUN_UBSAN=1
00:14:15.727 ++ NET_TYPE=virt
00:14:15.727 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:15.727 ++ RUN_NIGHTLY=0
00:14:15.727 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:14:15.727 + nvme_files=()
00:14:15.727 + declare -A nvme_files
00:14:15.727 + backend_dir=/var/lib/libvirt/images/backends
00:14:15.727 + nvme_files['nvme.img']=5G
00:14:15.727 + nvme_files['nvme-cmb.img']=5G
00:14:15.727 + nvme_files['nvme-multi0.img']=4G
00:14:15.727 + nvme_files['nvme-multi1.img']=4G
00:14:15.727 + nvme_files['nvme-multi2.img']=4G
00:14:15.727 + nvme_files['nvme-openstack.img']=8G
00:14:15.727 + nvme_files['nvme-zns.img']=5G
00:14:15.727 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:15.727 + (( SPDK_TEST_FTL == 1 ))
00:14:15.727 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:15.727 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:14:15.727 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:15.727 + for nvme in "${!nvme_files[@]}"
00:14:15.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:14:16.659 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:16.659 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:14:16.659 + echo 'End stage prepare_nvme.sh'
00:14:16.659 End stage prepare_nvme.sh
00:14:16.671 [Pipeline] sh
00:14:16.952 + DISTRO=fedora39
00:14:16.952 + CPUS=10
00:14:16.952 + RAM=12288
00:14:16.952 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:16.952 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora39
00:14:16.952
00:14:16.952 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant
00:14:16.952 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk
00:14:16.952 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:14:16.952 HELP=0
00:14:16.952 DRY_RUN=0
00:14:16.952 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,
00:14:16.952 NVME_DISKS_TYPE=nvme,nvme,
00:14:16.952 NVME_AUTO_CREATE=0
00:14:16.952 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,
00:14:16.952 NVME_CMB=,,
00:14:16.952 NVME_PMR=,,
00:14:16.952 NVME_ZNS=,,
00:14:16.952 NVME_MS=,,
00:14:16.952 NVME_FDP=,,
00:14:16.952 SPDK_VAGRANT_DISTRO=fedora39
00:14:16.952 SPDK_VAGRANT_VMCPU=10
00:14:16.952 SPDK_VAGRANT_VMRAM=12288
00:14:16.952 SPDK_VAGRANT_PROVIDER=libvirt
00:14:16.952 SPDK_VAGRANT_HTTP_PROXY=
00:14:16.952 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:16.952 SPDK_OPENSTACK_NETWORK=0
00:14:16.952 VAGRANT_PACKAGE_BOX=0
00:14:16.952 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:14:16.952 FORCE_DISTRO=true
00:14:16.952 VAGRANT_BOX_VERSION=
00:14:16.952 EXTRA_VAGRANTFILES=
00:14:16.952 NIC_MODEL=e1000
00:14:16.952
00:14:16.952 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt'
00:14:16.952 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:14:19.482 Bringing machine 'default' up with 'libvirt' provider...
00:14:19.764 ==> default: Creating image (snapshot of base box volume).
00:14:20.022 ==> default: Creating domain with the following settings...
00:14:20.022 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732086703_64027c4c7aafe9bf0c21
00:14:20.022 ==> default: -- Domain type: kvm
00:14:20.022 ==> default: -- Cpus: 10
00:14:20.022 ==> default: -- Feature: acpi
00:14:20.022 ==> default: -- Feature: apic
00:14:20.022 ==> default: -- Feature: pae
00:14:20.022 ==> default: -- Memory: 12288M
00:14:20.022 ==> default: -- Memory Backing: hugepages:
00:14:20.022 ==> default: -- Management MAC:
00:14:20.022 ==> default: -- Loader:
00:14:20.022 ==> default: -- Nvram:
00:14:20.022 ==> default: -- Base box: spdk/fedora39
00:14:20.022 ==> default: -- Storage pool: default
00:14:20.022 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732086703_64027c4c7aafe9bf0c21.img (20G)
00:14:20.022 ==> default: -- Volume Cache: default
00:14:20.022 ==> default: -- Kernel:
00:14:20.022 ==> default: -- Initrd:
00:14:20.022 ==> default: -- Graphics Type: vnc
00:14:20.022 ==> default: -- Graphics Port: -1
00:14:20.022 ==> default: -- Graphics IP: 127.0.0.1
00:14:20.022 ==> default: -- Graphics Password: Not defined
00:14:20.022 ==> default: -- Video Type: cirrus
00:14:20.022 ==> default: -- Video VRAM: 9216
00:14:20.022 ==> default: -- Sound Type:
00:14:20.022 ==> default: -- Keymap: en-us
00:14:20.022 ==> default: -- TPM Path:
00:14:20.022 ==> default: -- INPUT: type=mouse, bus=ps2
00:14:20.022 ==> default: -- Command line args:
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:20.022 ==> default: -> value=-drive,
00:14:20.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0,
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:20.022 ==> default: -> value=-drive,
00:14:20.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.022 ==> default: -> value=-drive,
00:14:20.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.022 ==> default: -> value=-drive,
00:14:20.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:14:20.022 ==> default: -> value=-device,
00:14:20.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.022 ==> default: Creating shared folders metadata...
00:14:20.022 ==> default: Starting domain.
00:14:20.956 ==> default: Waiting for domain to get an IP address...
00:14:35.925 ==> default: Waiting for SSH to become available...
00:14:35.925 ==> default: Configuring and enabling network interfaces...
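Joined together, the -device/-drive pairs listed under "Command line args" above describe two emulated NVMe controllers: nvme-0 (serial 12340) with one namespace backed by ex8-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the three ex8-nvme-multi*.img files. As a sketch, the equivalent hand-written QEMU invocation would be the following (binary path taken from SPDK_QEMU_EMULATOR above; the trailing commas in the listing are libvirt argument separators and are dropped here, and the machine, memory, and network options that vagrant-libvirt adds on its own are omitted):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This matches the device tree reported later in the guest, where setup.sh status shows nvme0 with one namespace (nvme0n1) and nvme1 with three (nvme1n1 nvme1n2 nvme1n3).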
00:14:39.214 default: SSH address: 192.168.121.202:22
00:14:39.214 default: SSH username: vagrant
00:14:39.214 default: SSH auth method: private key
00:14:40.586 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:14:47.142 ==> default: Mounting SSHFS shared folder...
00:14:48.072 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:14:48.072 ==> default: Checking Mount..
00:14:49.004 ==> default: Folder Successfully Mounted!
00:14:49.004
00:14:49.004 SUCCESS!
00:14:49.004
00:14:49.004 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:14:49.004 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:14:49.004 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:14:49.004
00:14:49.011 [Pipeline] }
00:14:49.031 [Pipeline] // stage
00:14:49.042 [Pipeline] dir
00:14:49.043 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt
00:14:49.044 [Pipeline] {
00:14:49.058 [Pipeline] catchError
00:14:49.060 [Pipeline] {
00:14:49.072 [Pipeline] sh
00:14:49.346 + vagrant ssh-config --host vagrant
00:14:49.346 + sed -ne '/^Host/,$p'
00:14:49.346 + tee ssh_conf
00:14:51.903 Host vagrant
00:14:51.903 HostName 192.168.121.202
00:14:51.903 User vagrant
00:14:51.903 Port 22
00:14:51.903 UserKnownHostsFile /dev/null
00:14:51.903 StrictHostKeyChecking no
00:14:51.903 PasswordAuthentication no
00:14:51.903 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:14:51.903 IdentitiesOnly yes
00:14:51.903 LogLevel FATAL
00:14:51.903 ForwardAgent yes
00:14:51.903 ForwardX11 yes
00:14:51.903
00:14:51.914 [Pipeline] withEnv
00:14:51.917 [Pipeline] {
00:14:51.932 [Pipeline] sh
00:14:52.209 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:14:52.209 source /etc/os-release
00:14:52.209 [[ -e /image.version ]] && img=$(< /image.version)
00:14:52.209 # Minimal, systemd-like check.
00:14:52.209 if [[ -e /.dockerenv ]]; then
00:14:52.209 # Clear garbage from the node'\''s name:
00:14:52.209 # agt-er_autotest_547-896 -> autotest_547-896
00:14:52.209 # $HOSTNAME is the actual container id
00:14:52.209 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:14:52.209 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:14:52.209 # We can assume this is a mount from a host where container is running,
00:14:52.209 # so fetch its hostname to easily identify the target swarm worker.
00:14:52.209 container="$(< /etc/hostname) ($agent)"
00:14:52.209 else
00:14:52.209 # Fallback
00:14:52.209 container=$agent
00:14:52.209 fi
00:14:52.209 fi
00:14:52.209 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:14:52.209 '
00:14:52.220 [Pipeline] }
00:14:52.239 [Pipeline] // withEnv
00:14:52.249 [Pipeline] setCustomBuildProperty
00:14:52.264 [Pipeline] stage
00:14:52.267 [Pipeline] { (Tests)
00:14:52.283 [Pipeline] sh
00:14:52.563 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:14:52.838 [Pipeline] sh
00:14:53.121 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:14:53.394 [Pipeline] timeout
00:14:53.394 Timeout set to expire in 1 hr 0 min
00:14:53.395 [Pipeline] {
00:14:53.408 [Pipeline] sh
00:14:53.688 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:14:54.257 HEAD is now at 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:14:54.268 [Pipeline] sh
00:14:54.547 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:14:54.560 [Pipeline] sh
00:14:54.849 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:14:55.120 [Pipeline] sh
00:14:55.400 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo'
00:14:55.659 ++ readlink -f spdk_repo
00:14:55.659 + DIR_ROOT=/home/vagrant/spdk_repo
00:14:55.659 + [[ -n /home/vagrant/spdk_repo ]]
00:14:55.659 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:14:55.659 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:14:55.659 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:14:55.659 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:14:55.659 + [[ -d /home/vagrant/spdk_repo/output ]]
00:14:55.659 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:14:55.659 + cd /home/vagrant/spdk_repo
00:14:55.659 + source /etc/os-release
00:14:55.659 ++ NAME='Fedora Linux'
00:14:55.659 ++ VERSION='39 (Cloud Edition)'
00:14:55.659 ++ ID=fedora
00:14:55.659 ++ VERSION_ID=39
00:14:55.659 ++ VERSION_CODENAME=
00:14:55.659 ++ PLATFORM_ID=platform:f39
00:14:55.659 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:14:55.659 ++ ANSI_COLOR='0;38;2;60;110;180'
00:14:55.659 ++ LOGO=fedora-logo-icon
00:14:55.659 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:14:55.659 ++ HOME_URL=https://fedoraproject.org/
00:14:55.659 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:14:55.659 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:14:55.659 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:14:55.659 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:14:55.659 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:14:55.659 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:14:55.659 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:14:55.659 ++ SUPPORT_END=2024-11-12
00:14:55.659 ++ VARIANT='Cloud Edition'
00:14:55.659 ++ VARIANT_ID=cloud
00:14:55.659 + uname -a
00:14:55.659 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:14:55.659 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:14:55.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:55.956 Hugepages
00:14:55.956 node hugesize free / total
00:14:55.956 node0 1048576kB 0 / 0
00:14:55.956 node0 2048kB 0 / 0
00:14:55.956
00:14:55.956 Type BDF Vendor Device NUMA Driver Device Block devices
00:14:55.956 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:14:55.956 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:14:55.956 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:14:55.956 + rm -f /tmp/spdk-ld-path
00:14:55.956 + source autorun-spdk.conf
00:14:55.956 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:55.956 ++ SPDK_TEST_NVMF=1
00:14:55.956 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:14:55.956 ++ SPDK_TEST_URING=1
00:14:55.956 ++ SPDK_TEST_USDT=1
00:14:55.956 ++ SPDK_RUN_UBSAN=1
00:14:55.956 ++ NET_TYPE=virt
00:14:55.956 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:55.956 ++ RUN_NIGHTLY=0
00:14:55.956 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:14:55.956 + [[ -n '' ]]
00:14:55.956 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:14:55.956 + for M in /var/spdk/build-*-manifest.txt
00:14:55.956 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:14:55.956 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:14:55.956 + for M in /var/spdk/build-*-manifest.txt
00:14:55.956 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:14:55.956 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:14:55.956 + for M in /var/spdk/build-*-manifest.txt
00:14:55.956 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:14:56.217 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:14:56.217 ++ uname
00:14:56.217 + [[ Linux == \L\i\n\u\x ]]
00:14:56.217 + sudo dmesg -T
00:14:56.217 + sudo dmesg --clear
00:14:56.217 + dmesg_pid=4987
00:14:56.217 + [[ Fedora Linux == FreeBSD ]]
00:14:56.217 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:56.217 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:56.217 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:56.217 + [[ -x /usr/src/fio-static/fio ]]
00:14:56.217 + sudo dmesg -Tw
00:14:56.217 + export FIO_BIN=/usr/src/fio-static/fio
00:14:56.217 + FIO_BIN=/usr/src/fio-static/fio
00:14:56.217 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:14:56.217 + [[ ! -v VFIO_QEMU_BIN ]]
00:14:56.217 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:14:56.217 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:56.217 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:56.217 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:14:56.217 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:56.217 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:56.217 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.479 07:12:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:14:56.479 07:12:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:56.479 07:12:20 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:14:56.479 07:12:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:14:56.479 07:12:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.479 07:12:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:14:56.479 07:12:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:56.479 07:12:20 -- scripts/common.sh@15 -- $ shopt -s extglob
00:14:56.479 07:12:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:14:56.479 07:12:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:56.479 07:12:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:56.479 07:12:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.479 07:12:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.479 07:12:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.479 07:12:20 -- paths/export.sh@5 -- $ export PATH
00:14:56.479 07:12:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.479 07:12:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:14:56.479 07:12:20 -- common/autobuild_common.sh@493 -- $ date +%s
00:14:56.479 07:12:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086740.XXXXXX
00:14:56.479 07:12:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086740.GvcmjG
00:14:56.479 07:12:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:14:56.479 07:12:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:14:56.479 07:12:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:14:56.479 07:12:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:14:56.479 07:12:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:14:56.479 07:12:20 -- common/autobuild_common.sh@509 -- $ get_config_params
00:14:56.479 07:12:20 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:14:56.479 07:12:20 -- common/autotest_common.sh@10 -- $ set +x
00:14:56.479 07:12:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:14:56.479 07:12:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:14:56.479 07:12:20 -- pm/common@17 -- $ local monitor
00:14:56.479 07:12:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:56.479 07:12:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:56.479 07:12:20 -- pm/common@25 -- $ sleep 1
00:14:56.479 07:12:20 -- pm/common@21 -- $ date +%s
00:14:56.479 07:12:20 -- pm/common@21 -- $ date +%s
00:14:56.479 07:12:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086740
00:14:56.479 07:12:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086740
00:14:56.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086740_collect-cpu-load.pm.log
00:14:56.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086740_collect-vmstat.pm.log
00:14:57.414 07:12:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:14:57.414 07:12:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:14:57.414 07:12:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:14:57.414 07:12:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:14:57.414 07:12:21 -- spdk/autobuild.sh@16 -- $ date -u
00:14:57.414 Wed Nov 20 07:12:21 AM UTC 2024
00:14:57.414 07:12:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:14:57.414 v25.01-pre-202-g400f484f7
00:14:57.414 07:12:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:14:57.414 07:12:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:14:57.414 07:12:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:14:57.414 07:12:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:14:57.414 07:12:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:14:57.414 07:12:21 -- common/autotest_common.sh@10 -- $ set +x
00:14:57.414 ************************************
00:14:57.414 START TEST ubsan
00:14:57.414 ************************************
00:14:57.414 using ubsan
00:14:57.414 07:12:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:14:57.414
00:14:57.414 real 0m0.000s
00:14:57.414 user 0m0.000s
00:14:57.414 sys 0m0.000s
00:14:57.414 07:12:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:57.414 07:12:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:14:57.414 ************************************
00:14:57.414 END TEST ubsan
00:14:57.414 ************************************
00:14:57.414 07:12:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:14:57.414 07:12:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:14:57.414 07:12:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:14:57.414 07:12:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
00:14:57.672 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:57.672 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:57.929 Using 'verbs' RDMA provider
00:15:10.719 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:15:18.868 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:15:19.125 Creating mk/config.mk...done.
00:15:19.125 Creating mk/cc.flags.mk...done.
00:15:19.125 Type 'make' to build.
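Everything from the autorun-spdk.conf trace down to this point reduces to one configure invocation (visible in the autobuild.sh@67 trace above) followed by the make started below. A minimal sketch for reproducing the same build outside the CI, using the exact flag set printed in config_params plus the --with-shared flag that autobuild.sh appends, is:

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared
    make -j10

(The --with-fio path and the -j10 job count are the values this particular runner used; both are machine-specific.)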
00:15:19.125 07:12:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:15:19.125 07:12:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:19.125 07:12:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:19.125 07:12:43 -- common/autotest_common.sh@10 -- $ set +x
00:15:19.125 ************************************
00:15:19.125 START TEST make
00:15:19.125 ************************************
00:15:19.125 07:12:43 make -- common/autotest_common.sh@1129 -- $ make -j10
00:15:19.383 make[1]: Nothing to be done for 'all'.
00:15:29.348 The Meson build system
00:15:29.348 Version: 1.5.0
00:15:29.348 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:15:29.348 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:15:29.348 Build type: native build
00:15:29.348 Program cat found: YES (/usr/bin/cat)
00:15:29.349 Project name: DPDK
00:15:29.349 Project version: 24.03.0
00:15:29.349 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:15:29.349 C linker for the host machine: cc ld.bfd 2.40-14
00:15:29.349 Host machine cpu family: x86_64
00:15:29.349 Host machine cpu: x86_64
00:15:29.349 Message: ## Building in Developer Mode ##
00:15:29.349 Program pkg-config found: YES (/usr/bin/pkg-config)
00:15:29.349 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:15:29.349 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:15:29.349 Program python3 found: YES (/usr/bin/python3)
00:15:29.349 Program cat found: YES (/usr/bin/cat)
00:15:29.349 Compiler for C supports arguments -march=native: YES
00:15:29.349 Checking for size of "void *" : 8
00:15:29.349 Checking for size of "void *" : 8 (cached)
00:15:29.349 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:15:29.349 Library m found: YES
00:15:29.349 Library numa found: YES
00:15:29.349 Has header "numaif.h" : YES
00:15:29.349 Library fdt found: NO
00:15:29.349 Library execinfo found: NO
00:15:29.349 Has header "execinfo.h" : YES
00:15:29.349 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:15:29.349 Run-time dependency libarchive found: NO (tried pkgconfig)
00:15:29.349 Run-time dependency libbsd found: NO (tried pkgconfig)
00:15:29.349 Run-time dependency jansson found: NO (tried pkgconfig)
00:15:29.349 Run-time dependency openssl found: YES 3.1.1
00:15:29.349 Run-time dependency libpcap found: YES 1.10.4
00:15:29.349 Has header "pcap.h" with dependency libpcap: YES
00:15:29.349 Compiler for C supports arguments -Wcast-qual: YES
00:15:29.349 Compiler for C supports arguments -Wdeprecated: YES
00:15:29.349 Compiler for C supports arguments -Wformat: YES
00:15:29.349 Compiler for C supports arguments -Wformat-nonliteral: NO
00:15:29.349 Compiler for C supports arguments -Wformat-security: NO
00:15:29.349 Compiler for C supports arguments -Wmissing-declarations: YES
00:15:29.349 Compiler for C supports arguments -Wmissing-prototypes: YES
00:15:29.349 Compiler for C supports arguments -Wnested-externs: YES
00:15:29.349 Compiler for C supports arguments -Wold-style-definition: YES
00:15:29.349 Compiler for C supports arguments -Wpointer-arith: YES
00:15:29.349 Compiler for C supports arguments -Wsign-compare: YES
00:15:29.349 Compiler for C supports arguments -Wstrict-prototypes: YES
00:15:29.349 Compiler for C supports arguments -Wundef: YES
00:15:29.349 Compiler for C supports arguments -Wwrite-strings: YES
00:15:29.349 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:15:29.349 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:15:29.349 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:15:29.349 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:15:29.349 Program objdump found: YES (/usr/bin/objdump)
00:15:29.349 Compiler for C supports arguments -mavx512f: YES
00:15:29.349 Checking if "AVX512 checking" compiles: YES
00:15:29.349 Fetching value of define "__SSE4_2__" : 1
00:15:29.349 Fetching value of define "__AES__" : 1
00:15:29.349 Fetching value of define "__AVX__" : 1
00:15:29.349 Fetching value of define "__AVX2__" : 1
00:15:29.349 Fetching value of define "__AVX512BW__" : 1
00:15:29.349 Fetching value of define "__AVX512CD__" : 1
00:15:29.349 Fetching value of define "__AVX512DQ__" : 1
00:15:29.349 Fetching value of define "__AVX512F__" : 1
00:15:29.349 Fetching value of define "__AVX512VL__" : 1
00:15:29.349 Fetching value of define "__PCLMUL__" : 1
00:15:29.349 Fetching value of define "__RDRND__" : 1
00:15:29.349 Fetching value of define "__RDSEED__" : 1
00:15:29.349 Fetching value of define "__VPCLMULQDQ__" : 1
00:15:29.349 Fetching value of define "__znver1__" : (undefined)
00:15:29.349 Fetching value of define "__znver2__" : (undefined)
00:15:29.349 Fetching value of define "__znver3__" : (undefined)
00:15:29.349 Fetching value of define "__znver4__" : (undefined)
00:15:29.349 Compiler for C supports arguments -Wno-format-truncation: YES
00:15:29.349 Message: lib/log: Defining dependency "log"
00:15:29.349 Message: lib/kvargs: Defining dependency "kvargs"
00:15:29.349 Message: lib/telemetry: Defining dependency "telemetry"
00:15:29.349 Checking for function "getentropy" : NO
00:15:29.349 Message: lib/eal: Defining dependency "eal"
00:15:29.349 Message: lib/ring: Defining dependency "ring"
00:15:29.349 Message: lib/rcu: Defining dependency "rcu"
00:15:29.349 Message: lib/mempool: Defining dependency "mempool"
00:15:29.349 Message: lib/mbuf: Defining dependency "mbuf"
00:15:29.349 Fetching value of define "__PCLMUL__" : 1 (cached)
00:15:29.349 Fetching value of define "__AVX512F__" : 1 (cached)
00:15:29.349 Fetching value of define "__AVX512BW__" : 1 (cached)
00:15:29.349 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:15:29.349 Fetching value of define "__AVX512VL__" : 1 (cached)
00:15:29.349 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:15:29.349 Compiler for C supports arguments -mpclmul: YES
00:15:29.349 Compiler for C supports arguments -maes: YES
00:15:29.349 Compiler for C supports arguments -mavx512f: YES (cached)
00:15:29.349 Compiler for C supports arguments -mavx512bw: YES
00:15:29.349 Compiler for C supports arguments -mavx512dq: YES
00:15:29.349 Compiler for C supports arguments -mavx512vl: YES
00:15:29.349 Compiler for C supports arguments -mvpclmulqdq: YES
00:15:29.349 Compiler for C supports arguments -mavx2: YES
00:15:29.349 Compiler for C supports arguments -mavx: YES
00:15:29.349 Message: lib/net: Defining dependency "net"
00:15:29.349 Message: lib/meter: Defining dependency "meter"
00:15:29.349 Message: lib/ethdev: Defining dependency "ethdev"
00:15:29.349 Message: lib/pci: Defining dependency "pci"
00:15:29.349 Message: lib/cmdline: Defining dependency "cmdline"
00:15:29.349 Message: lib/hash: Defining dependency "hash"
00:15:29.349 Message: lib/timer: Defining dependency "timer"
00:15:29.349 Message: lib/compressdev: Defining dependency "compressdev"
00:15:29.349 Message: lib/cryptodev: Defining dependency "cryptodev"
00:15:29.349 Message: lib/dmadev: Defining dependency "dmadev"
00:15:29.349 Compiler for C supports arguments -Wno-cast-qual: YES
00:15:29.349 Message: lib/power: Defining dependency "power"
00:15:29.349 Message: lib/reorder: Defining dependency "reorder"
00:15:29.349 Message: lib/security: Defining dependency "security"
00:15:29.349 Has header "linux/userfaultfd.h" : YES
00:15:29.349 Has header "linux/vduse.h" : YES
00:15:29.349 Message: lib/vhost: Defining dependency "vhost"
00:15:29.349 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:15:29.349 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:15:29.349 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:15:29.349 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:15:29.349 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:15:29.349 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:15:29.349 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:15:29.349 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:15:29.349 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:15:29.349 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:15:29.349 Program doxygen found: YES (/usr/local/bin/doxygen)
00:15:29.349 Configuring doxy-api-html.conf using configuration
00:15:29.349 Configuring doxy-api-man.conf using configuration
00:15:29.349 Program mandb found: YES (/usr/bin/mandb)
00:15:29.349 Program sphinx-build found: NO
00:15:29.349 Configuring rte_build_config.h using configuration
00:15:29.349 Message:
00:15:29.349 =================
00:15:29.349 Applications Enabled
00:15:29.349 =================
00:15:29.349
00:15:29.349 apps:
00:15:29.349
00:15:29.349
00:15:29.349 Message:
00:15:29.349 =================
00:15:29.349 Libraries Enabled
00:15:29.349 =================
00:15:29.349
00:15:29.349 libs:
00:15:29.349 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:15:29.349 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:15:29.349 cryptodev, dmadev, power, reorder, security, vhost,
00:15:29.349
00:15:29.349 Message:
00:15:29.349 ===============
00:15:29.349 Drivers Enabled
00:15:29.349 ===============
00:15:29.349
00:15:29.349 common:
00:15:29.349
00:15:29.349 bus:
00:15:29.349 pci, vdev,
00:15:29.349 mempool:
00:15:29.349 ring,
00:15:29.349 dma:
00:15:29.349
00:15:29.349 net:
00:15:29.349
00:15:29.349 crypto:
00:15:29.349
00:15:29.349 compress:
00:15:29.349
00:15:29.349 vdpa:
00:15:29.349
00:15:29.349
00:15:29.349 Message:
00:15:29.349 =================
00:15:29.349 Content Skipped
00:15:29.349 =================
00:15:29.349
00:15:29.349 apps:
00:15:29.349 dumpcap: explicitly disabled via build config
00:15:29.349 graph: explicitly disabled via build config
00:15:29.349 pdump: explicitly disabled via build config
00:15:29.349 proc-info: explicitly disabled via build config
00:15:29.349 test-acl: explicitly disabled via build config
00:15:29.349 test-bbdev: explicitly disabled via build config
00:15:29.349 test-cmdline: explicitly disabled via build config
00:15:29.349 test-compress-perf: explicitly disabled via build config
00:15:29.349 test-crypto-perf: explicitly disabled via build config
00:15:29.349 test-dma-perf: explicitly disabled via build config
00:15:29.349 test-eventdev: explicitly disabled via build config
00:15:29.349 test-fib: explicitly disabled via build config
00:15:29.349 test-flow-perf: explicitly disabled via build config
00:15:29.349 test-gpudev: explicitly disabled via build config
00:15:29.349 test-mldev: explicitly disabled via build config
00:15:29.349 test-pipeline: explicitly disabled via build config
00:15:29.349 test-pmd: explicitly disabled via build config
00:15:29.349 test-regex: explicitly disabled via build config
00:15:29.349 test-sad: explicitly disabled via build config
00:15:29.349 test-security-perf: explicitly disabled via build config
00:15:29.349
00:15:29.349 libs:
00:15:29.350 argparse: explicitly disabled via build config
00:15:29.350 metrics: explicitly disabled via build config
00:15:29.350 acl: explicitly disabled via build config
00:15:29.350 bbdev: explicitly disabled via build config
00:15:29.350 bitratestats: explicitly disabled via build config
00:15:29.350 bpf: explicitly disabled via build config
00:15:29.350 cfgfile: explicitly disabled via build config
00:15:29.350 distributor: explicitly disabled via build config
00:15:29.350 efd: explicitly disabled via build config
00:15:29.350 eventdev: explicitly disabled via build config
00:15:29.350 dispatcher: explicitly disabled via build config
00:15:29.350 gpudev: explicitly disabled via build config
00:15:29.350 gro: explicitly disabled via build config
00:15:29.350 gso: explicitly disabled via build config
00:15:29.350 ip_frag: explicitly disabled via build config
00:15:29.350 jobstats: explicitly disabled via build config
00:15:29.350 latencystats: explicitly disabled via build config
00:15:29.350 lpm: explicitly disabled via build config
00:15:29.350 member: explicitly disabled via build config
00:15:29.350 pcapng: explicitly disabled via build config
00:15:29.350 rawdev: explicitly disabled via build config
00:15:29.350 regexdev: explicitly disabled via build config
00:15:29.350 mldev: explicitly disabled via build config
00:15:29.350 rib: explicitly disabled via build config
00:15:29.350 sched: explicitly disabled via build config
00:15:29.350 stack: explicitly disabled via build config
00:15:29.350 ipsec: explicitly disabled via build config
00:15:29.350 pdcp: explicitly disabled via build config
00:15:29.350 fib: explicitly disabled via build config
00:15:29.350 port: explicitly disabled via build config
00:15:29.350 pdump: explicitly disabled via build config
00:15:29.350 table: explicitly disabled via build config
00:15:29.350 pipeline: explicitly disabled via build config
00:15:29.350 graph: explicitly disabled via build config
00:15:29.350 node: explicitly disabled via build config
00:15:29.350
00:15:29.350 drivers:
00:15:29.350 common/cpt: not in enabled drivers build config
00:15:29.350 common/dpaax: not in enabled drivers build config
00:15:29.350 common/iavf: not in enabled drivers build config
00:15:29.350 common/idpf: not in enabled drivers build config
00:15:29.350 common/ionic: not in enabled drivers build config
00:15:29.350 common/mvep: not in enabled drivers build config
00:15:29.350 common/octeontx: not in enabled drivers build config
00:15:29.350 bus/auxiliary: not in enabled drivers build config
00:15:29.350 bus/cdx: not in enabled drivers build config
00:15:29.350 bus/dpaa: not in enabled drivers build config
00:15:29.350 bus/fslmc: not in enabled drivers build config
00:15:29.350 bus/ifpga: not in enabled drivers build config
00:15:29.350 bus/platform: not in enabled drivers build config
00:15:29.350 bus/uacce: not in enabled drivers build config
00:15:29.350 bus/vmbus: not in enabled drivers build config
00:15:29.350 common/cnxk: not in enabled drivers build config
00:15:29.350 common/mlx5: not in enabled drivers build config
00:15:29.350 common/nfp: not in enabled drivers build config
00:15:29.350 common/nitrox: not in enabled drivers build config
00:15:29.350 common/qat: not in enabled drivers build config
00:15:29.350 common/sfc_efx: not in enabled drivers build config
00:15:29.350 mempool/bucket: not in enabled drivers build config
00:15:29.350 mempool/cnxk: not in enabled drivers build config
00:15:29.350 mempool/dpaa: not in enabled drivers build config
00:15:29.350 mempool/dpaa2: not in enabled drivers build config
00:15:29.350 mempool/octeontx: not in enabled drivers build config
00:15:29.350 mempool/stack: not in enabled drivers build config
00:15:29.350 dma/cnxk: not in enabled drivers build config
00:15:29.350 dma/dpaa: not in enabled drivers build config
00:15:29.350 dma/dpaa2: not in enabled drivers build config
00:15:29.350 dma/hisilicon: not in enabled drivers build config
00:15:29.350 dma/idxd: not in enabled drivers build config
00:15:29.350 dma/ioat: not in enabled drivers build config
00:15:29.350 dma/skeleton: not in enabled drivers build config
00:15:29.350 net/af_packet: not in enabled drivers build config
00:15:29.350 net/af_xdp: not in enabled drivers build config
00:15:29.350 net/ark: not in enabled drivers build config
00:15:29.350 net/atlantic: not in enabled drivers build config
00:15:29.350 net/avp: not in enabled drivers build config
00:15:29.350 net/axgbe: not in enabled drivers build config
00:15:29.350 net/bnx2x: not in enabled drivers build config
00:15:29.350 net/bnxt: not in enabled drivers build config
00:15:29.350 net/bonding: not in enabled drivers build config
00:15:29.350 net/cnxk: not in enabled drivers build config
00:15:29.350 net/cpfl: not in enabled drivers build config
00:15:29.350 net/cxgbe: not in enabled drivers build config
00:15:29.350 net/dpaa: not in enabled drivers build config
00:15:29.350 net/dpaa2: not in enabled drivers build config
00:15:29.350 net/e1000: not in enabled drivers build config
00:15:29.350 net/ena: not in enabled drivers build config
00:15:29.350 net/enetc: not in enabled drivers build config
00:15:29.350 net/enetfec: not in enabled drivers build config
00:15:29.350 net/enic: not in enabled drivers build config
00:15:29.350 net/failsafe: not in enabled drivers build config
00:15:29.350 net/fm10k: not in enabled drivers build config
00:15:29.350 net/gve: not in enabled drivers build config
00:15:29.350 net/hinic: not in enabled drivers build config
00:15:29.350 net/hns3: not in enabled drivers build config
00:15:29.350 net/i40e: not in enabled drivers build config
00:15:29.350 net/iavf: not in enabled drivers build config
00:15:29.350 net/ice: not in enabled drivers build config
00:15:29.350 net/idpf: not in enabled drivers build config
00:15:29.350 net/igc: not in enabled drivers build config
00:15:29.350 net/ionic: not in enabled drivers build config
00:15:29.350 net/ipn3ke: not in enabled drivers build config
00:15:29.350 net/ixgbe: not in enabled drivers build config
00:15:29.350 net/mana: not in enabled drivers build config
00:15:29.350 net/memif: not in enabled drivers build config
00:15:29.350 net/mlx4: not in enabled drivers build config
00:15:29.350 net/mlx5: not in enabled drivers build config
00:15:29.350 net/mvneta: not in enabled drivers build config
00:15:29.350 net/mvpp2: not in enabled drivers build config
00:15:29.350 net/netvsc: not in enabled drivers build config
00:15:29.350 net/nfb: not in enabled drivers build config
00:15:29.350 net/nfp: not in enabled drivers build config
00:15:29.350 net/ngbe: not in enabled drivers build config
00:15:29.350 net/null: not in enabled drivers build config
00:15:29.350 net/octeontx: not in enabled drivers build config
00:15:29.350 net/octeon_ep: not in enabled drivers build config
00:15:29.350 net/pcap: not in enabled drivers build config
00:15:29.350 net/pfe: not in enabled drivers build config
00:15:29.350 net/qede: not in enabled drivers build config
00:15:29.350 net/ring: not in enabled drivers build config
00:15:29.350 net/sfc: not in enabled drivers build config
00:15:29.350 net/softnic: not in enabled drivers build config
00:15:29.350 net/tap: not in enabled drivers build config
00:15:29.350 net/thunderx: not in enabled drivers build config
00:15:29.350 net/txgbe: not in enabled drivers build config
00:15:29.350 net/vdev_netvsc: not in enabled drivers build config
00:15:29.350 net/vhost: not in enabled drivers build config
00:15:29.350 net/virtio: not in enabled drivers build config
00:15:29.350 net/vmxnet3: not in enabled drivers build config
00:15:29.350 raw/*: missing internal dependency, "rawdev"
00:15:29.350 crypto/armv8: not in enabled drivers build config
00:15:29.350 crypto/bcmfs: not in enabled drivers build config
00:15:29.350 crypto/caam_jr: not in enabled drivers build config
00:15:29.350 crypto/ccp: not in enabled drivers build config
00:15:29.350 crypto/cnxk: not in enabled drivers build config
00:15:29.350 crypto/dpaa_sec: not in enabled drivers build config
00:15:29.350 crypto/dpaa2_sec: not in enabled drivers build config
00:15:29.350 crypto/ipsec_mb: not in enabled drivers build config
00:15:29.350 crypto/mlx5: not in enabled drivers build config
00:15:29.350 crypto/mvsam: not in enabled drivers build config
00:15:29.350 crypto/nitrox: not in enabled drivers build config
00:15:29.350 crypto/null: not in enabled drivers build config
00:15:29.350 crypto/octeontx: not in enabled drivers build config
00:15:29.350 crypto/openssl: not in enabled drivers build config
00:15:29.350 crypto/scheduler: not in enabled drivers build config
00:15:29.350 crypto/uadk: not in enabled drivers build config
00:15:29.350 crypto/virtio: not in enabled drivers build config
00:15:29.350 compress/isal: not in enabled drivers build config
00:15:29.350 compress/mlx5: not in enabled drivers build config
00:15:29.350 compress/nitrox: not in enabled drivers build config
00:15:29.350 compress/octeontx: not in enabled drivers build config
00:15:29.350 compress/zlib: not in enabled drivers build config
00:15:29.350 regex/*: missing internal dependency, "regexdev"
00:15:29.350 ml/*: missing internal dependency, "mldev"
00:15:29.350 vdpa/ifc: not in enabled drivers build config
00:15:29.350 vdpa/mlx5: not in enabled drivers build config
00:15:29.350 vdpa/nfp: not in enabled drivers build config
00:15:29.350 vdpa/sfc: not in enabled drivers build config
00:15:29.350 event/*: missing internal dependency, "eventdev"
00:15:29.350 baseband/*: missing internal dependency, "bbdev"
00:15:29.350 gpu/*: missing internal dependency, "gpudev"
00:15:29.350
00:15:29.350
00:15:29.350 Build targets in project: 84
00:15:29.350
00:15:29.350 DPDK 24.03.0
00:15:29.350
00:15:29.350 User defined options
00:15:29.350 buildtype : debug
00:15:29.350 default_library : shared
00:15:29.350 libdir : lib
00:15:29.350 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:15:29.350 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:15:29.350 c_link_args :
00:15:29.350 cpu_instruction_set: native
00:15:29.350 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:15:29.350 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:15:29.350 enable_docs : false
00:15:29.350 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:15:29.350 enable_kmods : false
00:15:29.351 max_lcores : 128
00:15:29.351 tests : false
00:15:29.351
00:15:29.351 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:15:29.351 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:15:29.817 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:15:29.817 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:15:29.817 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:15:29.817 [4/267] Linking static target lib/librte_kvargs.a
00:15:29.817 [5/267] Linking static target lib/librte_log.a
00:15:29.817 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:15:29.608 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:15:29.608 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:15:29.608 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:15:29.608 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:15:29.608 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:15:29.608 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:15:29.608 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:15:29.866 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:15:29.866 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:15:29.866 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:15:29.866 [17/267] Linking static target lib/librte_telemetry.a
00:15:29.866 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:15:30.175 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:15:30.175 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:15:30.175 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:15:30.175 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:15:30.175 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:15:30.175 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:15:30.175 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:15:30.175 [26/267] Linking target lib/librte_log.so.24.1
00:15:30.175 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:15:30.432 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:15:30.432 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:15:30.432 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:15:30.432 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:15:30.432 [32/267] Linking target lib/librte_kvargs.so.24.1
00:15:30.432 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:15:30.690 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:15:30.690 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:15:30.690 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:15:30.690 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:15:30.690 [38/267] Linking target lib/librte_telemetry.so.24.1
00:15:30.690 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:15:30.690 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:15:30.690 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:15:30.690 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:15:30.690 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:15:30.690 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:15:30.947 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:15:30.947 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:15:30.947 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:15:30.947 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:15:30.947 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:15:31.204 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:15:31.204 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:15:31.204 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:15:31.204 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:15:31.462 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:15:31.462 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:15:31.462 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:15:31.462 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:15:31.462 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:15:31.462 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:15:31.462 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:15:31.462 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:15:31.462 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:15:31.720 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:15:31.720 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:15:31.720 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:15:31.720 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:15:31.977 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:15:31.977 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:15:31.977 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:15:31.977 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:15:31.977 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:15:31.977 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:15:31.977 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:15:32.234 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:15:32.234 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:15:32.234 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:15:32.234 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:15:32.234 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:15:32.492 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:15:32.492 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:15:32.492 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:15:32.492 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:15:32.492 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:15:32.492 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:15:32.749 [85/267] Linking static target lib/librte_eal.a
00:15:32.749 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:15:32.749 [87/267] Linking static target lib/librte_ring.a
00:15:32.749 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:15:32.749 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:15:32.749 [90/267] Linking static target lib/librte_rcu.a
00:15:32.749 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:15:32.749 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:15:33.006 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:15:33.006 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:15:33.006 [95/267] Linking static target lib/librte_mempool.a
00:15:33.006 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:15:33.264 [97/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:15:33.264 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:15:33.264 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:15:33.264 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:15:33.264 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:15:33.264 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:15:33.264 [103/267] Linking static target lib/librte_mbuf.a
00:15:33.521 [104/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:15:33.521 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:15:33.521 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:15:33.521 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:15:33.521 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:15:33.521 [109/267] Linking static target lib/librte_net.a
00:15:33.521 [110/267] Linking static target lib/librte_meter.a
00:15:33.778 [111/267] Compiling C object
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:33.778 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:33.778 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:33.778 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:33.778 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:33.778 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:34.036 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:34.036 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:34.036 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:34.294 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:34.294 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:34.552 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:34.552 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:34.552 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:34.552 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:34.552 [126/267] Linking static target lib/librte_pci.a 00:15:34.810 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:34.810 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:34.810 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:34.810 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:34.810 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:34.810 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:34.810 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:34.810 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:34.810 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:34.810 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:34.810 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:34.810 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:34.810 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:34.810 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:34.810 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:34.810 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:34.810 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:34.810 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:15:35.069 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:35.069 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:35.069 [147/267] Linking static target lib/librte_cmdline.a 00:15:35.069 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:35.069 [149/267] Linking static target lib/librte_ethdev.a 00:15:35.069 [150/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:35.329 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:35.329 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:35.329 [153/267] Linking static target lib/librte_timer.a 00:15:35.329 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:35.329 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:35.329 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:35.587 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:35.587 [158/267] Linking static target lib/librte_hash.a 00:15:35.587 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:35.587 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:35.587 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:35.587 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:35.587 [163/267] Linking static target lib/librte_compressdev.a 00:15:35.877 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:35.877 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:35.877 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:35.877 [167/267] Linking static target lib/librte_dmadev.a 00:15:35.877 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:35.877 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:35.877 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:36.135 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:36.135 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:36.135 [173/267] Linking static target lib/librte_cryptodev.a 00:15:36.135 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:36.393 [175/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.393 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:36.393 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:36.393 [178/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.393 [179/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.393 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:36.393 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.393 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:36.651 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:36.651 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:36.651 [185/267] Linking static target lib/librte_reorder.a 00:15:36.651 [186/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:36.651 [187/267] Linking static target lib/librte_power.a 00:15:36.651 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:36.909 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:36.909 [190/267] Linking static 
target lib/librte_security.a 00:15:36.909 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:36.909 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:37.166 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.166 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:37.424 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.424 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:37.424 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:37.681 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:37.681 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:37.681 [200/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.681 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:37.938 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:37.938 [203/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.938 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:37.938 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:37.938 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:38.195 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:38.195 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:38.195 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:38.195 [210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:38.195 [211/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:38.195 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:38.195 [213/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:38.451 [214/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:38.451 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:38.451 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:38.451 [217/267] Linking static target drivers/librte_bus_vdev.a 00:15:38.451 [218/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:38.451 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:38.451 [220/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:38.451 [221/267] Linking static target drivers/librte_bus_pci.a 00:15:38.451 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:38.451 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:38.451 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:38.451 [225/267] Linking static target drivers/librte_mempool_ring.a 00:15:38.451 [226/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:38.709 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson 
to capture output) 00:15:39.274 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:39.274 [229/267] Linking static target lib/librte_vhost.a 00:15:40.206 [230/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:40.206 [231/267] Linking target lib/librte_eal.so.24.1 00:15:40.206 [232/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:40.206 [233/267] Linking target lib/librte_ring.so.24.1 00:15:40.206 [234/267] Linking target lib/librte_pci.so.24.1 00:15:40.206 [235/267] Linking target lib/librte_meter.so.24.1 00:15:40.206 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:15:40.206 [237/267] Linking target lib/librte_timer.so.24.1 00:15:40.206 [238/267] Linking target lib/librte_dmadev.so.24.1 00:15:40.465 [239/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:40.465 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:40.465 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:40.465 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:40.465 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:40.465 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:40.465 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:15:40.465 [246/267] Linking target lib/librte_rcu.so.24.1 00:15:40.465 [247/267] Linking target lib/librte_mempool.so.24.1 00:15:40.465 [248/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:40.465 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:40.465 [250/267] Linking target drivers/librte_mempool_ring.so.24.1 00:15:40.465 [251/267] Linking target lib/librte_mbuf.so.24.1 00:15:40.723 [252/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:40.723 [253/267] Linking target lib/librte_reorder.so.24.1 00:15:40.723 [254/267] Linking target lib/librte_compressdev.so.24.1 00:15:40.723 [255/267] Linking target lib/librte_cryptodev.so.24.1 00:15:40.723 [256/267] Linking target lib/librte_net.so.24.1 00:15:40.723 [257/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:40.723 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:40.723 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:40.981 [260/267] Linking target lib/librte_cmdline.so.24.1 00:15:40.981 [261/267] Linking target lib/librte_security.so.24.1 00:15:40.981 [262/267] Linking target lib/librte_hash.so.24.1 00:15:40.981 [263/267] Linking target lib/librte_ethdev.so.24.1 00:15:40.981 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:40.981 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:40.981 [266/267] Linking target lib/librte_power.so.24.1 00:15:40.981 [267/267] Linking target lib/librte_vhost.so.24.1 00:15:40.981 INFO: autodetecting backend as ninja 00:15:40.981 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:15:59.081 CC lib/ut_mock/mock.o 00:15:59.081 CC lib/log/log.o 00:15:59.081 CC lib/log/log_deprecated.o 00:15:59.081 CC lib/ut/ut.o 
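With the last DPDK targets linked, the INFO lines above mark the hand-off to SPDK's own build: the output switches from ninja's [n/267] progress counters to SPDK's terse Makefile tags (CC, LIB, SO, SYMLINK). A minimal sketch of what each tag appears to stand for, using file names taken from the surrounding log lines; the real compiler flags, soname handling, and object lists are not shown in the log, so this illustrates the pattern rather than the literal commands:

    # Hypothetical expansion of the build tags seen below; flags elided.
    cc -c lib/ut_mock/mock.c -o mock.o                  # CC lib/ut_mock/mock.o (assuming the conventional .c source)
    ar crs libspdk_ut_mock.a mock.o                     # LIB libspdk_ut_mock.a  (static archive)
    cc -shared -o libspdk_ut_mock.so.6.0 mock.o         # SO libspdk_ut_mock.so.6.0 (versioned shared object)
    ln -sf libspdk_ut_mock.so.6.0 libspdk_ut_mock.so    # SYMLINK libspdk_ut_mock.so (unversioned symlink)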
00:15:59.081 CC lib/log/log_flags.o 00:15:59.081 LIB libspdk_ut.a 00:15:59.081 LIB libspdk_log.a 00:15:59.081 LIB libspdk_ut_mock.a 00:15:59.081 SO libspdk_ut.so.2.0 00:15:59.081 SO libspdk_log.so.7.1 00:15:59.081 SO libspdk_ut_mock.so.6.0 00:15:59.081 SYMLINK libspdk_ut.so 00:15:59.081 SYMLINK libspdk_log.so 00:15:59.081 SYMLINK libspdk_ut_mock.so 00:15:59.081 CC lib/dma/dma.o 00:15:59.081 CC lib/ioat/ioat.o 00:15:59.081 CC lib/util/base64.o 00:15:59.081 CC lib/util/cpuset.o 00:15:59.081 CC lib/util/bit_array.o 00:15:59.081 CC lib/util/crc16.o 00:15:59.081 CC lib/util/crc32.o 00:15:59.081 CXX lib/trace_parser/trace.o 00:15:59.081 CC lib/util/crc32c.o 00:15:59.081 CC lib/vfio_user/host/vfio_user_pci.o 00:15:59.081 CC lib/util/crc32_ieee.o 00:15:59.081 CC lib/util/crc64.o 00:15:59.081 CC lib/util/dif.o 00:15:59.081 CC lib/util/fd.o 00:15:59.081 LIB libspdk_dma.a 00:15:59.081 CC lib/util/fd_group.o 00:15:59.081 SO libspdk_dma.so.5.0 00:15:59.081 CC lib/util/file.o 00:15:59.081 SYMLINK libspdk_dma.so 00:15:59.081 CC lib/util/hexlify.o 00:15:59.081 CC lib/util/iov.o 00:15:59.081 CC lib/vfio_user/host/vfio_user.o 00:15:59.081 CC lib/util/math.o 00:15:59.081 LIB libspdk_ioat.a 00:15:59.081 SO libspdk_ioat.so.7.0 00:15:59.081 SYMLINK libspdk_ioat.so 00:15:59.081 CC lib/util/net.o 00:15:59.081 CC lib/util/pipe.o 00:15:59.081 CC lib/util/strerror_tls.o 00:15:59.081 CC lib/util/string.o 00:15:59.081 CC lib/util/uuid.o 00:15:59.081 CC lib/util/xor.o 00:15:59.081 CC lib/util/zipf.o 00:15:59.081 LIB libspdk_vfio_user.a 00:15:59.081 SO libspdk_vfio_user.so.5.0 00:15:59.081 CC lib/util/md5.o 00:15:59.081 SYMLINK libspdk_vfio_user.so 00:15:59.081 LIB libspdk_util.a 00:15:59.081 SO libspdk_util.so.10.1 00:15:59.081 LIB libspdk_trace_parser.a 00:15:59.081 SYMLINK libspdk_util.so 00:15:59.081 SO libspdk_trace_parser.so.6.0 00:15:59.339 SYMLINK libspdk_trace_parser.so 00:15:59.339 CC lib/vmd/vmd.o 00:15:59.339 CC lib/rdma_utils/rdma_utils.o 00:15:59.339 CC lib/vmd/led.o 00:15:59.339 CC lib/conf/conf.o 00:15:59.339 CC lib/env_dpdk/env.o 00:15:59.339 CC lib/idxd/idxd.o 00:15:59.339 CC lib/json/json_parse.o 00:15:59.339 CC lib/idxd/idxd_user.o 00:15:59.339 CC lib/env_dpdk/memory.o 00:15:59.339 CC lib/json/json_util.o 00:15:59.339 CC lib/idxd/idxd_kernel.o 00:15:59.339 CC lib/json/json_write.o 00:15:59.597 LIB libspdk_conf.a 00:15:59.597 CC lib/env_dpdk/pci.o 00:15:59.597 CC lib/env_dpdk/init.o 00:15:59.597 SO libspdk_conf.so.6.0 00:15:59.597 SYMLINK libspdk_conf.so 00:15:59.597 CC lib/env_dpdk/threads.o 00:15:59.597 CC lib/env_dpdk/pci_ioat.o 00:15:59.597 LIB libspdk_rdma_utils.a 00:15:59.597 SO libspdk_rdma_utils.so.1.0 00:15:59.597 LIB libspdk_json.a 00:15:59.597 SO libspdk_json.so.6.0 00:15:59.597 SYMLINK libspdk_rdma_utils.so 00:15:59.597 CC lib/env_dpdk/pci_virtio.o 00:15:59.597 CC lib/env_dpdk/pci_vmd.o 00:15:59.597 SYMLINK libspdk_json.so 00:15:59.854 CC lib/env_dpdk/pci_idxd.o 00:15:59.854 CC lib/env_dpdk/pci_event.o 00:15:59.854 LIB libspdk_idxd.a 00:15:59.854 LIB libspdk_vmd.a 00:15:59.854 SO libspdk_idxd.so.12.1 00:15:59.854 CC lib/env_dpdk/sigbus_handler.o 00:15:59.854 CC lib/env_dpdk/pci_dpdk.o 00:15:59.854 CC lib/rdma_provider/common.o 00:15:59.854 SO libspdk_vmd.so.6.0 00:15:59.854 CC lib/jsonrpc/jsonrpc_server.o 00:15:59.854 SYMLINK libspdk_idxd.so 00:15:59.854 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:59.854 SYMLINK libspdk_vmd.so 00:15:59.854 CC lib/jsonrpc/jsonrpc_client.o 00:15:59.854 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:59.854 CC lib/rdma_provider/rdma_provider_verbs.o 00:15:59.854 CC 
lib/env_dpdk/pci_dpdk_2207.o 00:15:59.854 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:00.112 LIB libspdk_jsonrpc.a 00:16:00.112 LIB libspdk_rdma_provider.a 00:16:00.112 SO libspdk_jsonrpc.so.6.0 00:16:00.112 SO libspdk_rdma_provider.so.7.0 00:16:00.112 SYMLINK libspdk_jsonrpc.so 00:16:00.112 SYMLINK libspdk_rdma_provider.so 00:16:00.112 LIB libspdk_env_dpdk.a 00:16:00.370 SO libspdk_env_dpdk.so.15.1 00:16:00.370 CC lib/rpc/rpc.o 00:16:00.370 SYMLINK libspdk_env_dpdk.so 00:16:00.628 LIB libspdk_rpc.a 00:16:00.628 SO libspdk_rpc.so.6.0 00:16:00.628 SYMLINK libspdk_rpc.so 00:16:00.886 CC lib/notify/notify_rpc.o 00:16:00.886 CC lib/notify/notify.o 00:16:00.886 CC lib/trace/trace.o 00:16:00.886 CC lib/trace/trace_rpc.o 00:16:00.886 CC lib/trace/trace_flags.o 00:16:00.886 CC lib/keyring/keyring.o 00:16:00.886 CC lib/keyring/keyring_rpc.o 00:16:00.886 LIB libspdk_notify.a 00:16:00.886 SO libspdk_notify.so.6.0 00:16:00.886 LIB libspdk_trace.a 00:16:00.886 LIB libspdk_keyring.a 00:16:00.886 SO libspdk_trace.so.11.0 00:16:00.886 SO libspdk_keyring.so.2.0 00:16:00.886 SYMLINK libspdk_notify.so 00:16:01.142 SYMLINK libspdk_trace.so 00:16:01.142 SYMLINK libspdk_keyring.so 00:16:01.142 CC lib/sock/sock.o 00:16:01.142 CC lib/sock/sock_rpc.o 00:16:01.142 CC lib/thread/thread.o 00:16:01.142 CC lib/thread/iobuf.o 00:16:01.399 LIB libspdk_sock.a 00:16:01.659 SO libspdk_sock.so.10.0 00:16:01.659 SYMLINK libspdk_sock.so 00:16:01.917 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:01.917 CC lib/nvme/nvme_fabric.o 00:16:01.917 CC lib/nvme/nvme_ns_cmd.o 00:16:01.917 CC lib/nvme/nvme_ctrlr.o 00:16:01.917 CC lib/nvme/nvme.o 00:16:01.917 CC lib/nvme/nvme_ns.o 00:16:01.917 CC lib/nvme/nvme_pcie.o 00:16:01.917 CC lib/nvme/nvme_qpair.o 00:16:01.917 CC lib/nvme/nvme_pcie_common.o 00:16:02.175 CC lib/nvme/nvme_quirks.o 00:16:02.432 LIB libspdk_thread.a 00:16:02.432 CC lib/nvme/nvme_transport.o 00:16:02.432 SO libspdk_thread.so.11.0 00:16:02.432 CC lib/nvme/nvme_discovery.o 00:16:02.432 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:02.432 SYMLINK libspdk_thread.so 00:16:02.432 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:02.432 CC lib/nvme/nvme_tcp.o 00:16:02.690 CC lib/nvme/nvme_opal.o 00:16:02.690 CC lib/nvme/nvme_io_msg.o 00:16:02.690 CC lib/nvme/nvme_poll_group.o 00:16:02.690 CC lib/nvme/nvme_zns.o 00:16:02.947 CC lib/nvme/nvme_stubs.o 00:16:02.947 CC lib/nvme/nvme_auth.o 00:16:02.947 CC lib/nvme/nvme_cuse.o 00:16:02.947 CC lib/nvme/nvme_rdma.o 00:16:03.205 CC lib/accel/accel.o 00:16:03.205 CC lib/blob/blobstore.o 00:16:03.205 CC lib/accel/accel_rpc.o 00:16:03.205 CC lib/init/json_config.o 00:16:03.205 CC lib/virtio/virtio.o 00:16:03.463 CC lib/virtio/virtio_vhost_user.o 00:16:03.463 CC lib/init/subsystem.o 00:16:03.463 CC lib/init/subsystem_rpc.o 00:16:03.463 CC lib/fsdev/fsdev.o 00:16:03.463 CC lib/fsdev/fsdev_io.o 00:16:03.463 CC lib/init/rpc.o 00:16:03.721 CC lib/virtio/virtio_vfio_user.o 00:16:03.721 CC lib/fsdev/fsdev_rpc.o 00:16:03.721 LIB libspdk_init.a 00:16:03.721 CC lib/accel/accel_sw.o 00:16:03.721 SO libspdk_init.so.6.0 00:16:03.721 CC lib/blob/request.o 00:16:03.721 SYMLINK libspdk_init.so 00:16:03.721 CC lib/blob/zeroes.o 00:16:03.721 CC lib/blob/blob_bs_dev.o 00:16:03.721 CC lib/virtio/virtio_pci.o 00:16:03.979 CC lib/event/app.o 00:16:03.979 CC lib/event/log_rpc.o 00:16:03.979 CC lib/event/reactor.o 00:16:03.979 CC lib/event/app_rpc.o 00:16:03.979 LIB libspdk_accel.a 00:16:03.979 CC lib/event/scheduler_static.o 00:16:03.979 LIB libspdk_fsdev.a 00:16:03.979 SO libspdk_accel.so.16.0 00:16:03.979 LIB libspdk_virtio.a 00:16:03.979 SO 
libspdk_fsdev.so.2.0 00:16:03.979 LIB libspdk_nvme.a 00:16:03.979 SO libspdk_virtio.so.7.0 00:16:03.979 SYMLINK libspdk_accel.so 00:16:03.979 SYMLINK libspdk_fsdev.so 00:16:04.239 SYMLINK libspdk_virtio.so 00:16:04.239 SO libspdk_nvme.so.15.0 00:16:04.239 CC lib/bdev/bdev.o 00:16:04.239 CC lib/bdev/bdev_rpc.o 00:16:04.239 CC lib/bdev/bdev_zone.o 00:16:04.239 CC lib/bdev/scsi_nvme.o 00:16:04.239 CC lib/bdev/part.o 00:16:04.239 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:04.239 LIB libspdk_event.a 00:16:04.239 SO libspdk_event.so.14.0 00:16:04.239 SYMLINK libspdk_nvme.so 00:16:04.497 SYMLINK libspdk_event.so 00:16:05.063 LIB libspdk_fuse_dispatcher.a 00:16:05.063 SO libspdk_fuse_dispatcher.so.1.0 00:16:05.063 SYMLINK libspdk_fuse_dispatcher.so 00:16:05.627 LIB libspdk_blob.a 00:16:05.628 SO libspdk_blob.so.11.0 00:16:05.885 SYMLINK libspdk_blob.so 00:16:05.885 CC lib/lvol/lvol.o 00:16:05.885 CC lib/blobfs/blobfs.o 00:16:05.885 CC lib/blobfs/tree.o 00:16:06.448 LIB libspdk_bdev.a 00:16:06.448 SO libspdk_bdev.so.17.0 00:16:06.448 SYMLINK libspdk_bdev.so 00:16:06.705 CC lib/nbd/nbd.o 00:16:06.705 CC lib/nbd/nbd_rpc.o 00:16:06.705 CC lib/scsi/dev.o 00:16:06.705 CC lib/scsi/lun.o 00:16:06.705 CC lib/scsi/port.o 00:16:06.706 CC lib/nvmf/ctrlr.o 00:16:06.706 CC lib/ublk/ublk.o 00:16:06.706 CC lib/ftl/ftl_core.o 00:16:06.706 LIB libspdk_blobfs.a 00:16:06.706 SO libspdk_blobfs.so.10.0 00:16:06.706 LIB libspdk_lvol.a 00:16:06.706 SYMLINK libspdk_blobfs.so 00:16:06.706 SO libspdk_lvol.so.10.0 00:16:06.706 CC lib/ublk/ublk_rpc.o 00:16:06.706 CC lib/scsi/scsi.o 00:16:06.706 SYMLINK libspdk_lvol.so 00:16:06.706 CC lib/ftl/ftl_init.o 00:16:06.706 CC lib/nvmf/ctrlr_discovery.o 00:16:06.962 CC lib/scsi/scsi_bdev.o 00:16:06.962 CC lib/scsi/scsi_pr.o 00:16:06.963 CC lib/scsi/scsi_rpc.o 00:16:06.963 CC lib/scsi/task.o 00:16:06.963 CC lib/ftl/ftl_layout.o 00:16:06.963 LIB libspdk_nbd.a 00:16:06.963 CC lib/ftl/ftl_debug.o 00:16:06.963 SO libspdk_nbd.so.7.0 00:16:06.963 CC lib/ftl/ftl_io.o 00:16:06.963 SYMLINK libspdk_nbd.so 00:16:06.963 CC lib/ftl/ftl_sb.o 00:16:07.219 CC lib/ftl/ftl_l2p.o 00:16:07.219 LIB libspdk_ublk.a 00:16:07.219 CC lib/ftl/ftl_l2p_flat.o 00:16:07.219 SO libspdk_ublk.so.3.0 00:16:07.219 CC lib/ftl/ftl_nv_cache.o 00:16:07.219 CC lib/ftl/ftl_band.o 00:16:07.219 CC lib/ftl/ftl_band_ops.o 00:16:07.219 SYMLINK libspdk_ublk.so 00:16:07.219 CC lib/ftl/ftl_writer.o 00:16:07.219 CC lib/ftl/ftl_rq.o 00:16:07.219 CC lib/ftl/ftl_reloc.o 00:16:07.219 LIB libspdk_scsi.a 00:16:07.219 SO libspdk_scsi.so.9.0 00:16:07.219 CC lib/ftl/ftl_l2p_cache.o 00:16:07.476 SYMLINK libspdk_scsi.so 00:16:07.476 CC lib/ftl/ftl_p2l.o 00:16:07.476 CC lib/ftl/ftl_p2l_log.o 00:16:07.476 CC lib/ftl/mngt/ftl_mngt.o 00:16:07.476 CC lib/iscsi/conn.o 00:16:07.476 CC lib/iscsi/init_grp.o 00:16:07.476 CC lib/iscsi/iscsi.o 00:16:07.476 CC lib/vhost/vhost.o 00:16:07.732 CC lib/iscsi/param.o 00:16:07.732 CC lib/iscsi/portal_grp.o 00:16:07.732 CC lib/iscsi/tgt_node.o 00:16:07.732 CC lib/vhost/vhost_rpc.o 00:16:07.732 CC lib/vhost/vhost_scsi.o 00:16:08.034 CC lib/vhost/vhost_blk.o 00:16:08.034 CC lib/vhost/rte_vhost_user.o 00:16:08.034 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:08.034 CC lib/iscsi/iscsi_subsystem.o 00:16:08.034 CC lib/iscsi/iscsi_rpc.o 00:16:08.034 CC lib/nvmf/ctrlr_bdev.o 00:16:08.034 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:08.311 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:08.311 CC lib/nvmf/subsystem.o 00:16:08.311 CC lib/nvmf/nvmf.o 00:16:08.311 CC lib/iscsi/task.o 00:16:08.311 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:08.311 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:16:08.311 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:08.568 LIB libspdk_iscsi.a 00:16:08.568 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:08.568 SO libspdk_iscsi.so.8.0 00:16:08.568 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:08.568 CC lib/nvmf/nvmf_rpc.o 00:16:08.568 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:08.568 CC lib/nvmf/transport.o 00:16:08.568 CC lib/nvmf/tcp.o 00:16:08.568 CC lib/nvmf/stubs.o 00:16:08.825 SYMLINK libspdk_iscsi.so 00:16:08.825 CC lib/nvmf/mdns_server.o 00:16:08.825 LIB libspdk_vhost.a 00:16:08.825 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:08.825 SO libspdk_vhost.so.8.0 00:16:08.825 CC lib/nvmf/rdma.o 00:16:08.825 SYMLINK libspdk_vhost.so 00:16:09.083 CC lib/nvmf/auth.o 00:16:09.083 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:09.083 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:09.083 CC lib/ftl/utils/ftl_conf.o 00:16:09.083 CC lib/ftl/utils/ftl_md.o 00:16:09.083 CC lib/ftl/utils/ftl_mempool.o 00:16:09.340 CC lib/ftl/utils/ftl_bitmap.o 00:16:09.340 CC lib/ftl/utils/ftl_property.o 00:16:09.340 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:09.340 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:09.340 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:09.598 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:09.598 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:09.598 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:09.598 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:09.598 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:09.598 CC lib/ftl/base/ftl_base_dev.o 00:16:09.598 CC lib/ftl/base/ftl_base_bdev.o 00:16:09.598 CC lib/ftl/ftl_trace.o 00:16:09.857 LIB libspdk_ftl.a 00:16:10.115 SO libspdk_ftl.so.9.0 00:16:10.115 SYMLINK libspdk_ftl.so 00:16:10.680 LIB libspdk_nvmf.a 00:16:10.680 SO libspdk_nvmf.so.20.0 00:16:10.680 SYMLINK libspdk_nvmf.so 00:16:10.937 CC module/env_dpdk/env_dpdk_rpc.o 00:16:11.262 CC module/blob/bdev/blob_bdev.o 00:16:11.262 CC module/keyring/file/keyring.o 00:16:11.262 CC module/sock/uring/uring.o 00:16:11.262 CC module/accel/error/accel_error.o 00:16:11.262 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:11.262 CC module/scheduler/gscheduler/gscheduler.o 00:16:11.262 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:11.262 CC module/sock/posix/posix.o 00:16:11.262 CC module/fsdev/aio/fsdev_aio.o 00:16:11.262 LIB libspdk_env_dpdk_rpc.a 00:16:11.262 SO libspdk_env_dpdk_rpc.so.6.0 00:16:11.262 SYMLINK libspdk_env_dpdk_rpc.so 00:16:11.262 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:11.262 CC module/keyring/file/keyring_rpc.o 00:16:11.262 LIB libspdk_scheduler_gscheduler.a 00:16:11.262 LIB libspdk_scheduler_dpdk_governor.a 00:16:11.262 SO libspdk_scheduler_gscheduler.so.4.0 00:16:11.262 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:11.262 CC module/accel/error/accel_error_rpc.o 00:16:11.262 LIB libspdk_scheduler_dynamic.a 00:16:11.262 SYMLINK libspdk_scheduler_gscheduler.so 00:16:11.262 CC module/fsdev/aio/linux_aio_mgr.o 00:16:11.262 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:11.262 LIB libspdk_blob_bdev.a 00:16:11.262 SO libspdk_scheduler_dynamic.so.4.0 00:16:11.262 LIB libspdk_keyring_file.a 00:16:11.262 SO libspdk_blob_bdev.so.11.0 00:16:11.262 SO libspdk_keyring_file.so.2.0 00:16:11.537 SYMLINK libspdk_scheduler_dynamic.so 00:16:11.537 LIB libspdk_accel_error.a 00:16:11.537 SYMLINK libspdk_blob_bdev.so 00:16:11.537 SO 
libspdk_accel_error.so.2.0 00:16:11.537 SYMLINK libspdk_keyring_file.so 00:16:11.537 SYMLINK libspdk_accel_error.so 00:16:11.537 CC module/accel/ioat/accel_ioat.o 00:16:11.537 CC module/accel/dsa/accel_dsa.o 00:16:11.537 CC module/accel/iaa/accel_iaa.o 00:16:11.537 CC module/keyring/linux/keyring.o 00:16:11.537 CC module/bdev/delay/vbdev_delay.o 00:16:11.537 CC module/accel/ioat/accel_ioat_rpc.o 00:16:11.537 CC module/bdev/error/vbdev_error.o 00:16:11.537 LIB libspdk_fsdev_aio.a 00:16:11.537 LIB libspdk_sock_uring.a 00:16:11.796 SO libspdk_fsdev_aio.so.1.0 00:16:11.796 SO libspdk_sock_uring.so.5.0 00:16:11.796 CC module/blobfs/bdev/blobfs_bdev.o 00:16:11.796 CC module/keyring/linux/keyring_rpc.o 00:16:11.796 CC module/accel/iaa/accel_iaa_rpc.o 00:16:11.796 LIB libspdk_sock_posix.a 00:16:11.796 SYMLINK libspdk_fsdev_aio.so 00:16:11.796 SO libspdk_sock_posix.so.6.0 00:16:11.796 SYMLINK libspdk_sock_uring.so 00:16:11.796 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:11.796 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:11.796 LIB libspdk_accel_ioat.a 00:16:11.796 SO libspdk_accel_ioat.so.6.0 00:16:11.797 CC module/accel/dsa/accel_dsa_rpc.o 00:16:11.797 SYMLINK libspdk_sock_posix.so 00:16:11.797 CC module/bdev/error/vbdev_error_rpc.o 00:16:11.797 LIB libspdk_keyring_linux.a 00:16:11.797 SYMLINK libspdk_accel_ioat.so 00:16:11.797 SO libspdk_keyring_linux.so.1.0 00:16:11.797 LIB libspdk_accel_iaa.a 00:16:11.797 SO libspdk_accel_iaa.so.3.0 00:16:11.797 LIB libspdk_blobfs_bdev.a 00:16:11.797 SYMLINK libspdk_keyring_linux.so 00:16:11.797 LIB libspdk_accel_dsa.a 00:16:11.797 SO libspdk_blobfs_bdev.so.6.0 00:16:11.797 SYMLINK libspdk_accel_iaa.so 00:16:12.055 LIB libspdk_bdev_delay.a 00:16:12.055 SO libspdk_accel_dsa.so.5.0 00:16:12.055 LIB libspdk_bdev_error.a 00:16:12.055 SO libspdk_bdev_delay.so.6.0 00:16:12.055 CC module/bdev/gpt/gpt.o 00:16:12.055 SYMLINK libspdk_blobfs_bdev.so 00:16:12.055 SO libspdk_bdev_error.so.6.0 00:16:12.055 CC module/bdev/lvol/vbdev_lvol.o 00:16:12.055 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:12.055 SYMLINK libspdk_accel_dsa.so 00:16:12.055 CC module/bdev/gpt/vbdev_gpt.o 00:16:12.055 SYMLINK libspdk_bdev_delay.so 00:16:12.055 CC module/bdev/null/bdev_null.o 00:16:12.055 CC module/bdev/null/bdev_null_rpc.o 00:16:12.055 CC module/bdev/malloc/bdev_malloc.o 00:16:12.055 SYMLINK libspdk_bdev_error.so 00:16:12.055 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:12.055 CC module/bdev/nvme/bdev_nvme.o 00:16:12.055 CC module/bdev/passthru/vbdev_passthru.o 00:16:12.055 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:12.055 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:12.313 LIB libspdk_bdev_gpt.a 00:16:12.313 SO libspdk_bdev_gpt.so.6.0 00:16:12.313 SYMLINK libspdk_bdev_gpt.so 00:16:12.313 LIB libspdk_bdev_passthru.a 00:16:12.313 CC module/bdev/raid/bdev_raid.o 00:16:12.313 LIB libspdk_bdev_null.a 00:16:12.313 SO libspdk_bdev_passthru.so.6.0 00:16:12.313 LIB libspdk_bdev_malloc.a 00:16:12.313 SO libspdk_bdev_null.so.6.0 00:16:12.313 SO libspdk_bdev_malloc.so.6.0 00:16:12.313 SYMLINK libspdk_bdev_passthru.so 00:16:12.313 LIB libspdk_bdev_lvol.a 00:16:12.313 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:12.313 CC module/bdev/split/vbdev_split.o 00:16:12.313 SYMLINK libspdk_bdev_null.so 00:16:12.313 SYMLINK libspdk_bdev_malloc.so 00:16:12.313 SO libspdk_bdev_lvol.so.6.0 00:16:12.313 CC module/bdev/uring/bdev_uring.o 00:16:12.572 SYMLINK libspdk_bdev_lvol.so 00:16:12.572 CC module/bdev/raid/bdev_raid_rpc.o 00:16:12.572 CC module/bdev/aio/bdev_aio.o 00:16:12.572 CC 
module/bdev/ftl/bdev_ftl.o 00:16:12.572 CC module/bdev/iscsi/bdev_iscsi.o 00:16:12.572 CC module/bdev/split/vbdev_split_rpc.o 00:16:12.572 CC module/bdev/aio/bdev_aio_rpc.o 00:16:12.572 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:12.830 CC module/bdev/uring/bdev_uring_rpc.o 00:16:12.830 LIB libspdk_bdev_split.a 00:16:12.830 SO libspdk_bdev_split.so.6.0 00:16:12.830 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:12.830 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:12.830 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:12.830 LIB libspdk_bdev_aio.a 00:16:12.830 LIB libspdk_bdev_zone_block.a 00:16:12.830 SYMLINK libspdk_bdev_split.so 00:16:12.830 CC module/bdev/raid/bdev_raid_sb.o 00:16:12.830 SO libspdk_bdev_aio.so.6.0 00:16:12.830 SO libspdk_bdev_zone_block.so.6.0 00:16:12.830 LIB libspdk_bdev_uring.a 00:16:12.830 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:12.830 SO libspdk_bdev_uring.so.6.0 00:16:12.830 SYMLINK libspdk_bdev_aio.so 00:16:12.830 SYMLINK libspdk_bdev_zone_block.so 00:16:12.830 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:12.830 CC module/bdev/nvme/nvme_rpc.o 00:16:13.089 SYMLINK libspdk_bdev_uring.so 00:16:13.089 CC module/bdev/nvme/bdev_mdns_client.o 00:16:13.089 LIB libspdk_bdev_ftl.a 00:16:13.089 LIB libspdk_bdev_iscsi.a 00:16:13.089 SO libspdk_bdev_ftl.so.6.0 00:16:13.089 SO libspdk_bdev_iscsi.so.6.0 00:16:13.089 SYMLINK libspdk_bdev_ftl.so 00:16:13.089 CC module/bdev/raid/raid0.o 00:16:13.089 SYMLINK libspdk_bdev_iscsi.so 00:16:13.089 CC module/bdev/raid/raid1.o 00:16:13.089 CC module/bdev/raid/concat.o 00:16:13.089 CC module/bdev/nvme/vbdev_opal.o 00:16:13.089 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:13.089 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:13.347 LIB libspdk_bdev_virtio.a 00:16:13.347 SO libspdk_bdev_virtio.so.6.0 00:16:13.347 LIB libspdk_bdev_raid.a 00:16:13.347 SYMLINK libspdk_bdev_virtio.so 00:16:13.347 SO libspdk_bdev_raid.so.6.0 00:16:13.347 SYMLINK libspdk_bdev_raid.so 00:16:14.279 LIB libspdk_bdev_nvme.a 00:16:14.279 SO libspdk_bdev_nvme.so.7.1 00:16:14.279 SYMLINK libspdk_bdev_nvme.so 00:16:14.622 CC module/event/subsystems/sock/sock.o 00:16:14.622 CC module/event/subsystems/vmd/vmd.o 00:16:14.622 CC module/event/subsystems/keyring/keyring.o 00:16:14.622 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:14.622 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:14.622 CC module/event/subsystems/fsdev/fsdev.o 00:16:14.622 CC module/event/subsystems/iobuf/iobuf.o 00:16:14.622 CC module/event/subsystems/scheduler/scheduler.o 00:16:14.622 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:14.622 LIB libspdk_event_vhost_blk.a 00:16:14.622 LIB libspdk_event_fsdev.a 00:16:14.622 LIB libspdk_event_keyring.a 00:16:14.622 LIB libspdk_event_vmd.a 00:16:14.622 LIB libspdk_event_scheduler.a 00:16:14.622 LIB libspdk_event_sock.a 00:16:14.622 LIB libspdk_event_iobuf.a 00:16:14.622 SO libspdk_event_vhost_blk.so.3.0 00:16:14.622 SO libspdk_event_keyring.so.1.0 00:16:14.622 SO libspdk_event_fsdev.so.1.0 00:16:14.622 SO libspdk_event_scheduler.so.4.0 00:16:14.622 SO libspdk_event_vmd.so.6.0 00:16:14.622 SO libspdk_event_sock.so.5.0 00:16:14.622 SO libspdk_event_iobuf.so.3.0 00:16:14.881 SYMLINK libspdk_event_fsdev.so 00:16:14.881 SYMLINK libspdk_event_scheduler.so 00:16:14.881 SYMLINK libspdk_event_vhost_blk.so 00:16:14.881 SYMLINK libspdk_event_keyring.so 00:16:14.881 SYMLINK libspdk_event_sock.so 00:16:14.881 SYMLINK libspdk_event_vmd.so 00:16:14.881 SYMLINK libspdk_event_iobuf.so 00:16:14.881 CC module/event/subsystems/accel/accel.o 00:16:15.154 LIB 
libspdk_event_accel.a 00:16:15.154 SO libspdk_event_accel.so.6.0 00:16:15.154 SYMLINK libspdk_event_accel.so 00:16:15.413 CC module/event/subsystems/bdev/bdev.o 00:16:15.671 LIB libspdk_event_bdev.a 00:16:15.671 SO libspdk_event_bdev.so.6.0 00:16:15.671 SYMLINK libspdk_event_bdev.so 00:16:15.929 CC module/event/subsystems/ublk/ublk.o 00:16:15.929 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:15.929 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:15.929 CC module/event/subsystems/scsi/scsi.o 00:16:15.929 CC module/event/subsystems/nbd/nbd.o 00:16:15.929 LIB libspdk_event_ublk.a 00:16:15.929 LIB libspdk_event_scsi.a 00:16:15.929 SO libspdk_event_ublk.so.3.0 00:16:15.929 SO libspdk_event_scsi.so.6.0 00:16:15.929 LIB libspdk_event_nbd.a 00:16:15.929 SYMLINK libspdk_event_ublk.so 00:16:15.929 SYMLINK libspdk_event_scsi.so 00:16:15.929 SO libspdk_event_nbd.so.6.0 00:16:15.929 LIB libspdk_event_nvmf.a 00:16:16.187 SO libspdk_event_nvmf.so.6.0 00:16:16.187 SYMLINK libspdk_event_nbd.so 00:16:16.187 SYMLINK libspdk_event_nvmf.so 00:16:16.187 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:16.187 CC module/event/subsystems/iscsi/iscsi.o 00:16:16.187 LIB libspdk_event_vhost_scsi.a 00:16:16.445 SO libspdk_event_vhost_scsi.so.3.0 00:16:16.445 LIB libspdk_event_iscsi.a 00:16:16.445 SO libspdk_event_iscsi.so.6.0 00:16:16.445 SYMLINK libspdk_event_vhost_scsi.so 00:16:16.445 SYMLINK libspdk_event_iscsi.so 00:16:16.445 SO libspdk.so.6.0 00:16:16.445 SYMLINK libspdk.so 00:16:16.703 CC app/trace_record/trace_record.o 00:16:16.703 CXX app/trace/trace.o 00:16:16.703 CC app/spdk_nvme_identify/identify.o 00:16:16.703 CC app/spdk_nvme_perf/perf.o 00:16:16.703 CC app/spdk_lspci/spdk_lspci.o 00:16:16.703 CC app/nvmf_tgt/nvmf_main.o 00:16:16.703 CC app/iscsi_tgt/iscsi_tgt.o 00:16:16.703 CC app/spdk_tgt/spdk_tgt.o 00:16:16.703 CC examples/util/zipf/zipf.o 00:16:16.703 CC test/thread/poller_perf/poller_perf.o 00:16:16.961 LINK spdk_lspci 00:16:16.961 LINK spdk_trace_record 00:16:16.961 LINK nvmf_tgt 00:16:16.961 LINK zipf 00:16:16.961 LINK poller_perf 00:16:16.961 LINK iscsi_tgt 00:16:16.961 LINK spdk_tgt 00:16:16.961 LINK spdk_trace 00:16:16.961 CC app/spdk_nvme_discover/discovery_aer.o 00:16:17.219 CC app/spdk_top/spdk_top.o 00:16:17.219 CC test/dma/test_dma/test_dma.o 00:16:17.219 CC examples/ioat/perf/perf.o 00:16:17.219 LINK spdk_nvme_discover 00:16:17.219 CC examples/ioat/verify/verify.o 00:16:17.219 CC examples/vmd/lsvmd/lsvmd.o 00:16:17.219 CC examples/idxd/perf/perf.o 00:16:17.219 CC test/app/bdev_svc/bdev_svc.o 00:16:17.219 LINK spdk_nvme_identify 00:16:17.476 LINK ioat_perf 00:16:17.476 LINK lsvmd 00:16:17.476 LINK verify 00:16:17.476 LINK spdk_nvme_perf 00:16:17.476 LINK bdev_svc 00:16:17.476 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:17.476 CC test/app/histogram_perf/histogram_perf.o 00:16:17.476 TEST_HEADER include/spdk/accel.h 00:16:17.476 TEST_HEADER include/spdk/accel_module.h 00:16:17.476 LINK idxd_perf 00:16:17.476 TEST_HEADER include/spdk/assert.h 00:16:17.476 TEST_HEADER include/spdk/barrier.h 00:16:17.476 TEST_HEADER include/spdk/base64.h 00:16:17.476 TEST_HEADER include/spdk/bdev.h 00:16:17.476 TEST_HEADER include/spdk/bdev_module.h 00:16:17.476 TEST_HEADER include/spdk/bdev_zone.h 00:16:17.476 TEST_HEADER include/spdk/bit_array.h 00:16:17.476 CC examples/vmd/led/led.o 00:16:17.476 TEST_HEADER include/spdk/bit_pool.h 00:16:17.476 TEST_HEADER include/spdk/blob_bdev.h 00:16:17.772 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:17.772 TEST_HEADER include/spdk/blobfs.h 00:16:17.772 
TEST_HEADER include/spdk/blob.h 00:16:17.772 TEST_HEADER include/spdk/conf.h 00:16:17.772 TEST_HEADER include/spdk/config.h 00:16:17.772 TEST_HEADER include/spdk/cpuset.h 00:16:17.772 TEST_HEADER include/spdk/crc16.h 00:16:17.772 TEST_HEADER include/spdk/crc32.h 00:16:17.772 TEST_HEADER include/spdk/crc64.h 00:16:17.772 TEST_HEADER include/spdk/dif.h 00:16:17.772 TEST_HEADER include/spdk/dma.h 00:16:17.772 TEST_HEADER include/spdk/endian.h 00:16:17.772 TEST_HEADER include/spdk/env_dpdk.h 00:16:17.772 TEST_HEADER include/spdk/env.h 00:16:17.772 TEST_HEADER include/spdk/event.h 00:16:17.772 LINK test_dma 00:16:17.772 TEST_HEADER include/spdk/fd_group.h 00:16:17.772 TEST_HEADER include/spdk/fd.h 00:16:17.772 TEST_HEADER include/spdk/file.h 00:16:17.772 TEST_HEADER include/spdk/fsdev.h 00:16:17.772 TEST_HEADER include/spdk/fsdev_module.h 00:16:17.772 TEST_HEADER include/spdk/ftl.h 00:16:17.772 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:17.772 TEST_HEADER include/spdk/gpt_spec.h 00:16:17.772 TEST_HEADER include/spdk/hexlify.h 00:16:17.772 TEST_HEADER include/spdk/histogram_data.h 00:16:17.772 TEST_HEADER include/spdk/idxd.h 00:16:17.772 TEST_HEADER include/spdk/idxd_spec.h 00:16:17.772 TEST_HEADER include/spdk/init.h 00:16:17.772 TEST_HEADER include/spdk/ioat.h 00:16:17.772 TEST_HEADER include/spdk/ioat_spec.h 00:16:17.772 TEST_HEADER include/spdk/iscsi_spec.h 00:16:17.772 TEST_HEADER include/spdk/json.h 00:16:17.772 TEST_HEADER include/spdk/jsonrpc.h 00:16:17.772 TEST_HEADER include/spdk/keyring.h 00:16:17.772 TEST_HEADER include/spdk/keyring_module.h 00:16:17.772 LINK histogram_perf 00:16:17.772 TEST_HEADER include/spdk/likely.h 00:16:17.772 TEST_HEADER include/spdk/log.h 00:16:17.772 TEST_HEADER include/spdk/lvol.h 00:16:17.772 TEST_HEADER include/spdk/md5.h 00:16:17.772 TEST_HEADER include/spdk/memory.h 00:16:17.772 TEST_HEADER include/spdk/mmio.h 00:16:17.772 TEST_HEADER include/spdk/nbd.h 00:16:17.772 TEST_HEADER include/spdk/net.h 00:16:17.772 TEST_HEADER include/spdk/notify.h 00:16:17.772 TEST_HEADER include/spdk/nvme.h 00:16:17.772 TEST_HEADER include/spdk/nvme_intel.h 00:16:17.772 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:17.772 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:17.772 TEST_HEADER include/spdk/nvme_spec.h 00:16:17.772 TEST_HEADER include/spdk/nvme_zns.h 00:16:17.772 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:17.772 CC test/event/event_perf/event_perf.o 00:16:17.772 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:17.772 TEST_HEADER include/spdk/nvmf.h 00:16:17.772 TEST_HEADER include/spdk/nvmf_spec.h 00:16:17.772 TEST_HEADER include/spdk/nvmf_transport.h 00:16:17.772 TEST_HEADER include/spdk/opal.h 00:16:17.772 TEST_HEADER include/spdk/opal_spec.h 00:16:17.772 TEST_HEADER include/spdk/pci_ids.h 00:16:17.772 CC test/event/reactor/reactor.o 00:16:17.772 TEST_HEADER include/spdk/pipe.h 00:16:17.772 TEST_HEADER include/spdk/queue.h 00:16:17.772 TEST_HEADER include/spdk/reduce.h 00:16:17.772 TEST_HEADER include/spdk/rpc.h 00:16:17.772 TEST_HEADER include/spdk/scheduler.h 00:16:17.772 TEST_HEADER include/spdk/scsi.h 00:16:17.772 TEST_HEADER include/spdk/scsi_spec.h 00:16:17.772 TEST_HEADER include/spdk/sock.h 00:16:17.772 TEST_HEADER include/spdk/stdinc.h 00:16:17.772 TEST_HEADER include/spdk/string.h 00:16:17.772 TEST_HEADER include/spdk/thread.h 00:16:17.772 TEST_HEADER include/spdk/trace.h 00:16:17.772 TEST_HEADER include/spdk/trace_parser.h 00:16:17.772 TEST_HEADER include/spdk/tree.h 00:16:17.772 TEST_HEADER include/spdk/ublk.h 00:16:17.772 TEST_HEADER 
include/spdk/util.h 00:16:17.772 TEST_HEADER include/spdk/uuid.h 00:16:17.772 TEST_HEADER include/spdk/version.h 00:16:17.772 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:17.772 LINK led 00:16:17.772 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:17.772 TEST_HEADER include/spdk/vhost.h 00:16:17.772 CC test/env/mem_callbacks/mem_callbacks.o 00:16:17.772 TEST_HEADER include/spdk/vmd.h 00:16:17.772 TEST_HEADER include/spdk/xor.h 00:16:17.772 TEST_HEADER include/spdk/zipf.h 00:16:17.772 CXX test/cpp_headers/accel.o 00:16:17.772 LINK spdk_top 00:16:17.772 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:17.772 CXX test/cpp_headers/accel_module.o 00:16:17.772 CXX test/cpp_headers/assert.o 00:16:17.772 LINK reactor 00:16:17.772 LINK event_perf 00:16:17.772 LINK nvme_fuzz 00:16:18.032 CXX test/cpp_headers/barrier.o 00:16:18.032 CC test/rpc_client/rpc_client_test.o 00:16:18.032 CC app/spdk_dd/spdk_dd.o 00:16:18.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:18.032 CC test/event/reactor_perf/reactor_perf.o 00:16:18.032 CC test/env/vtophys/vtophys.o 00:16:18.032 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:18.032 CXX test/cpp_headers/base64.o 00:16:18.032 CC test/env/memory/memory_ut.o 00:16:18.032 LINK reactor_perf 00:16:18.032 LINK rpc_client_test 00:16:18.032 LINK env_dpdk_post_init 00:16:18.032 LINK vtophys 00:16:18.032 LINK interrupt_tgt 00:16:18.291 CXX test/cpp_headers/bdev.o 00:16:18.291 LINK mem_callbacks 00:16:18.291 CXX test/cpp_headers/bdev_module.o 00:16:18.291 CXX test/cpp_headers/bdev_zone.o 00:16:18.291 CC test/event/app_repeat/app_repeat.o 00:16:18.291 CC test/env/pci/pci_ut.o 00:16:18.291 LINK spdk_dd 00:16:18.291 CXX test/cpp_headers/bit_array.o 00:16:18.291 LINK app_repeat 00:16:18.551 CC examples/thread/thread/thread_ex.o 00:16:18.551 CC test/app/jsoncat/jsoncat.o 00:16:18.551 CC test/accel/dif/dif.o 00:16:18.551 CC test/blobfs/mkfs/mkfs.o 00:16:18.551 CXX test/cpp_headers/bit_pool.o 00:16:18.551 LINK jsoncat 00:16:18.551 CC app/fio/nvme/fio_plugin.o 00:16:18.551 LINK thread 00:16:18.551 CC test/event/scheduler/scheduler.o 00:16:18.551 LINK pci_ut 00:16:18.551 CXX test/cpp_headers/blob_bdev.o 00:16:18.551 LINK mkfs 00:16:18.812 CC app/fio/bdev/fio_plugin.o 00:16:18.812 CXX test/cpp_headers/blobfs_bdev.o 00:16:18.812 LINK scheduler 00:16:18.812 CXX test/cpp_headers/blobfs.o 00:16:18.812 CC test/app/stub/stub.o 00:16:18.812 CC examples/sock/hello_world/hello_sock.o 00:16:19.073 CXX test/cpp_headers/blob.o 00:16:19.073 LINK memory_ut 00:16:19.073 LINK dif 00:16:19.073 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:19.073 LINK stub 00:16:19.073 LINK spdk_nvme 00:16:19.073 LINK iscsi_fuzz 00:16:19.073 CXX test/cpp_headers/conf.o 00:16:19.073 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:19.073 LINK hello_sock 00:16:19.073 LINK spdk_bdev 00:16:19.073 CXX test/cpp_headers/config.o 00:16:19.073 CXX test/cpp_headers/cpuset.o 00:16:19.073 CXX test/cpp_headers/crc16.o 00:16:19.073 CXX test/cpp_headers/crc32.o 00:16:19.073 CXX test/cpp_headers/crc64.o 00:16:19.073 CC test/lvol/esnap/esnap.o 00:16:19.331 CXX test/cpp_headers/dif.o 00:16:19.331 CXX test/cpp_headers/dma.o 00:16:19.331 CXX test/cpp_headers/endian.o 00:16:19.331 CXX test/cpp_headers/env_dpdk.o 00:16:19.331 CXX test/cpp_headers/env.o 00:16:19.331 CXX test/cpp_headers/event.o 00:16:19.331 CC app/vhost/vhost.o 00:16:19.331 CXX test/cpp_headers/fd_group.o 00:16:19.331 LINK vhost_fuzz 00:16:19.331 CXX test/cpp_headers/fd.o 00:16:19.331 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:19.331 CXX 
test/cpp_headers/file.o 00:16:19.590 CC examples/accel/perf/accel_perf.o 00:16:19.590 CC test/nvme/aer/aer.o 00:16:19.590 LINK vhost 00:16:19.590 CXX test/cpp_headers/fsdev.o 00:16:19.590 CC test/nvme/reset/reset.o 00:16:19.590 CC test/nvme/sgl/sgl.o 00:16:19.590 CC test/nvme/e2edp/nvme_dp.o 00:16:19.590 LINK hello_fsdev 00:16:19.590 CC test/nvme/overhead/overhead.o 00:16:19.590 CXX test/cpp_headers/fsdev_module.o 00:16:19.848 LINK aer 00:16:19.848 CC test/nvme/err_injection/err_injection.o 00:16:19.848 LINK reset 00:16:19.848 LINK sgl 00:16:19.848 CXX test/cpp_headers/ftl.o 00:16:19.848 LINK overhead 00:16:19.848 CC test/nvme/startup/startup.o 00:16:19.848 LINK nvme_dp 00:16:19.848 LINK accel_perf 00:16:19.848 CXX test/cpp_headers/fuse_dispatcher.o 00:16:19.848 LINK err_injection 00:16:20.106 LINK startup 00:16:20.106 CXX test/cpp_headers/gpt_spec.o 00:16:20.106 CC examples/blob/hello_world/hello_blob.o 00:16:20.106 CC examples/nvme/hello_world/hello_world.o 00:16:20.106 CC examples/nvme/reconnect/reconnect.o 00:16:20.106 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:20.106 CC examples/blob/cli/blobcli.o 00:16:20.106 CXX test/cpp_headers/hexlify.o 00:16:20.106 CC examples/bdev/hello_world/hello_bdev.o 00:16:20.106 CC test/nvme/reserve/reserve.o 00:16:20.106 CXX test/cpp_headers/histogram_data.o 00:16:20.106 LINK hello_blob 00:16:20.106 CC test/bdev/bdevio/bdevio.o 00:16:20.106 LINK hello_world 00:16:20.363 CXX test/cpp_headers/idxd.o 00:16:20.363 LINK reconnect 00:16:20.363 CXX test/cpp_headers/idxd_spec.o 00:16:20.363 LINK reserve 00:16:20.363 LINK hello_bdev 00:16:20.363 CXX test/cpp_headers/init.o 00:16:20.363 LINK nvme_manage 00:16:20.363 LINK blobcli 00:16:20.621 CXX test/cpp_headers/ioat.o 00:16:20.621 CC examples/nvme/arbitration/arbitration.o 00:16:20.621 CC examples/nvme/hotplug/hotplug.o 00:16:20.621 CC examples/bdev/bdevperf/bdevperf.o 00:16:20.621 LINK bdevio 00:16:20.621 CC test/nvme/simple_copy/simple_copy.o 00:16:20.621 CC test/nvme/connect_stress/connect_stress.o 00:16:20.621 CC examples/nvme/cmb_copy/cmb_copy.o 00:16:20.621 CXX test/cpp_headers/ioat_spec.o 00:16:20.621 CXX test/cpp_headers/iscsi_spec.o 00:16:20.621 CC examples/nvme/abort/abort.o 00:16:20.621 LINK simple_copy 00:16:20.621 LINK connect_stress 00:16:20.621 LINK hotplug 00:16:20.878 LINK cmb_copy 00:16:20.878 LINK arbitration 00:16:20.878 CXX test/cpp_headers/json.o 00:16:20.878 CXX test/cpp_headers/jsonrpc.o 00:16:20.878 CC test/nvme/boot_partition/boot_partition.o 00:16:20.878 CXX test/cpp_headers/keyring.o 00:16:20.878 CC test/nvme/compliance/nvme_compliance.o 00:16:20.878 CXX test/cpp_headers/keyring_module.o 00:16:20.878 CC test/nvme/fused_ordering/fused_ordering.o 00:16:20.878 LINK boot_partition 00:16:20.878 CXX test/cpp_headers/likely.o 00:16:20.878 LINK abort 00:16:20.878 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:21.191 CXX test/cpp_headers/log.o 00:16:21.191 CC test/nvme/fdp/fdp.o 00:16:21.191 CXX test/cpp_headers/lvol.o 00:16:21.191 LINK nvme_compliance 00:16:21.191 CXX test/cpp_headers/md5.o 00:16:21.191 LINK fused_ordering 00:16:21.191 LINK doorbell_aers 00:16:21.191 CXX test/cpp_headers/memory.o 00:16:21.191 LINK bdevperf 00:16:21.191 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:16:21.191 CXX test/cpp_headers/mmio.o 00:16:21.191 CXX test/cpp_headers/nbd.o 00:16:21.191 CC test/nvme/cuse/cuse.o 00:16:21.191 LINK fdp 00:16:21.191 CXX test/cpp_headers/net.o 00:16:21.191 CXX test/cpp_headers/notify.o 00:16:21.191 CXX test/cpp_headers/nvme.o 00:16:21.449 CXX 
test/cpp_headers/nvme_intel.o 00:16:21.449 CXX test/cpp_headers/nvme_ocssd.o 00:16:21.449 LINK pmr_persistence 00:16:21.449 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:21.449 CXX test/cpp_headers/nvme_spec.o 00:16:21.449 CXX test/cpp_headers/nvme_zns.o 00:16:21.449 CXX test/cpp_headers/nvmf_cmd.o 00:16:21.449 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:21.449 CXX test/cpp_headers/nvmf.o 00:16:21.449 CXX test/cpp_headers/nvmf_spec.o 00:16:21.449 CXX test/cpp_headers/nvmf_transport.o 00:16:21.449 CXX test/cpp_headers/opal.o 00:16:21.449 CXX test/cpp_headers/opal_spec.o 00:16:21.449 CXX test/cpp_headers/pci_ids.o 00:16:21.706 CXX test/cpp_headers/pipe.o 00:16:21.706 CXX test/cpp_headers/queue.o 00:16:21.706 CXX test/cpp_headers/reduce.o 00:16:21.706 CXX test/cpp_headers/rpc.o 00:16:21.706 CC examples/nvmf/nvmf/nvmf.o 00:16:21.706 CXX test/cpp_headers/scheduler.o 00:16:21.706 CXX test/cpp_headers/scsi.o 00:16:21.706 CXX test/cpp_headers/scsi_spec.o 00:16:21.706 CXX test/cpp_headers/sock.o 00:16:21.706 CXX test/cpp_headers/stdinc.o 00:16:21.706 CXX test/cpp_headers/string.o 00:16:21.706 CXX test/cpp_headers/thread.o 00:16:21.706 CXX test/cpp_headers/trace.o 00:16:21.706 CXX test/cpp_headers/trace_parser.o 00:16:21.706 CXX test/cpp_headers/tree.o 00:16:21.706 CXX test/cpp_headers/ublk.o 00:16:21.706 CXX test/cpp_headers/util.o 00:16:21.964 CXX test/cpp_headers/uuid.o 00:16:21.964 CXX test/cpp_headers/version.o 00:16:21.964 CXX test/cpp_headers/vfio_user_pci.o 00:16:21.964 CXX test/cpp_headers/vfio_user_spec.o 00:16:21.964 LINK nvmf 00:16:21.964 CXX test/cpp_headers/vhost.o 00:16:21.964 CXX test/cpp_headers/vmd.o 00:16:21.964 CXX test/cpp_headers/xor.o 00:16:21.964 CXX test/cpp_headers/zipf.o 00:16:21.964 LINK cuse 00:16:23.337 LINK esnap 00:16:23.337 00:16:23.337 real 1m4.260s 00:16:23.337 user 5m55.263s 00:16:23.337 sys 1m4.061s 00:16:23.337 07:13:47 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:16:23.337 07:13:47 make -- common/autotest_common.sh@10 -- $ set +x 00:16:23.337 ************************************ 00:16:23.337 END TEST make 00:16:23.337 ************************************ 00:16:23.595 07:13:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:16:23.595 07:13:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:16:23.595 07:13:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:16:23.595 07:13:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:23.595 07:13:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:16:23.595 07:13:47 -- pm/common@44 -- $ pid=5029 00:16:23.595 07:13:47 -- pm/common@50 -- $ kill -TERM 5029 00:16:23.595 07:13:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:23.595 07:13:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:16:23.595 07:13:47 -- pm/common@44 -- $ pid=5030 00:16:23.595 07:13:47 -- pm/common@50 -- $ kill -TERM 5030 00:16:23.595 07:13:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:16:23.595 07:13:47 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:16:23.595 07:13:47 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:23.595 07:13:47 -- common/autotest_common.sh@1693 -- # lcov --version 00:16:23.595 07:13:47 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:23.595 07:13:47 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:23.595 
07:13:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.595 07:13:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.595 07:13:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.595 07:13:47 -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.595 07:13:47 -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.595 07:13:47 -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.595 07:13:47 -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.595 07:13:47 -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.595 07:13:47 -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.595 07:13:47 -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.595 07:13:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.595 07:13:47 -- scripts/common.sh@344 -- # case "$op" in 00:16:23.595 07:13:47 -- scripts/common.sh@345 -- # : 1 00:16:23.595 07:13:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.595 07:13:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.595 07:13:47 -- scripts/common.sh@365 -- # decimal 1 00:16:23.595 07:13:47 -- scripts/common.sh@353 -- # local d=1 00:16:23.595 07:13:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.595 07:13:47 -- scripts/common.sh@355 -- # echo 1 00:16:23.595 07:13:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.595 07:13:47 -- scripts/common.sh@366 -- # decimal 2 00:16:23.595 07:13:47 -- scripts/common.sh@353 -- # local d=2 00:16:23.595 07:13:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.595 07:13:47 -- scripts/common.sh@355 -- # echo 2 00:16:23.595 07:13:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.595 07:13:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.595 07:13:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.595 07:13:47 -- scripts/common.sh@368 -- # return 0 00:16:23.595 07:13:47 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.595 07:13:47 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:23.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.595 --rc genhtml_branch_coverage=1 00:16:23.595 --rc genhtml_function_coverage=1 00:16:23.595 --rc genhtml_legend=1 00:16:23.595 --rc geninfo_all_blocks=1 00:16:23.595 --rc geninfo_unexecuted_blocks=1 00:16:23.595 00:16:23.595 ' 00:16:23.596 07:13:47 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:23.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.596 --rc genhtml_branch_coverage=1 00:16:23.596 --rc genhtml_function_coverage=1 00:16:23.596 --rc genhtml_legend=1 00:16:23.596 --rc geninfo_all_blocks=1 00:16:23.596 --rc geninfo_unexecuted_blocks=1 00:16:23.596 00:16:23.596 ' 00:16:23.596 07:13:47 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:23.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.596 --rc genhtml_branch_coverage=1 00:16:23.596 --rc genhtml_function_coverage=1 00:16:23.596 --rc genhtml_legend=1 00:16:23.596 --rc geninfo_all_blocks=1 00:16:23.596 --rc geninfo_unexecuted_blocks=1 00:16:23.596 00:16:23.596 ' 00:16:23.596 07:13:47 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:23.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.596 --rc genhtml_branch_coverage=1 00:16:23.596 --rc genhtml_function_coverage=1 00:16:23.596 --rc genhtml_legend=1 00:16:23.596 --rc geninfo_all_blocks=1 00:16:23.596 --rc geninfo_unexecuted_blocks=1 00:16:23.596 00:16:23.596 ' 00:16:23.596 07:13:47 -- 
spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.596 07:13:47 -- nvmf/common.sh@7 -- # uname -s 00:16:23.596 07:13:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.596 07:13:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.596 07:13:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.596 07:13:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.596 07:13:47 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.596 07:13:47 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:23.596 07:13:47 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.596 07:13:47 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:23.596 07:13:47 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:16:23.596 07:13:47 -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:16:23.596 07:13:47 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.596 07:13:47 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:23.596 07:13:47 -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:16:23.596 07:13:47 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.596 07:13:47 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.596 07:13:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.596 07:13:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.596 07:13:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.596 07:13:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.596 07:13:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.596 07:13:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.596 07:13:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.596 07:13:47 -- paths/export.sh@5 -- # export PATH 00:16:23.596 07:13:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.596 07:13:47 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:16:23.596 07:13:47 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:23.596 07:13:47 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:23.596 07:13:47 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:23.596 07:13:47 -- nvmf/common.sh@50 -- # : 0 00:16:23.596 07:13:47 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:23.596 07:13:47 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:23.596 
07:13:47 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:23.596 07:13:47 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.596 07:13:47 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.596 07:13:47 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:23.596 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:23.596 07:13:47 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:23.596 07:13:47 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:23.596 07:13:47 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:23.596 07:13:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:16:23.596 07:13:47 -- spdk/autotest.sh@32 -- # uname -s 00:16:23.596 07:13:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:16:23.596 07:13:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:16:23.596 07:13:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:23.596 07:13:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:16:23.596 07:13:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:23.596 07:13:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:16:23.596 07:13:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:16:23.596 07:13:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:16:23.596 07:13:47 -- spdk/autotest.sh@48 -- # udevadm_pid=53801 00:16:23.596 07:13:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:16:23.596 07:13:47 -- pm/common@17 -- # local monitor 00:16:23.596 07:13:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:23.596 07:13:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:23.596 07:13:47 -- pm/common@25 -- # sleep 1 00:16:23.596 07:13:47 -- pm/common@21 -- # date +%s 00:16:23.596 07:13:47 -- pm/common@21 -- # date +%s 00:16:23.596 07:13:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:16:23.596 07:13:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086827 00:16:23.596 07:13:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086827 00:16:23.596 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086827_collect-vmstat.pm.log 00:16:23.854 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086827_collect-cpu-load.pm.log 00:16:24.788 07:13:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:16:24.788 07:13:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:16:24.788 07:13:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.788 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:24.788 07:13:48 -- spdk/autotest.sh@59 -- # create_test_list 00:16:24.788 07:13:48 -- common/autotest_common.sh@752 -- # xtrace_disable 00:16:24.788 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:24.788 07:13:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:16:24.788 07:13:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:16:24.788 07:13:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:16:24.788 07:13:48 -- spdk/autotest.sh@62 -- # 
out=/home/vagrant/spdk_repo/spdk/../output 00:16:24.788 07:13:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:16:24.788 07:13:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:16:24.788 07:13:48 -- common/autotest_common.sh@1457 -- # uname 00:16:24.788 07:13:48 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:16:24.788 07:13:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:16:24.788 07:13:48 -- common/autotest_common.sh@1477 -- # uname 00:16:24.788 07:13:48 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:16:24.788 07:13:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:16:24.788 07:13:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:16:24.788 lcov: LCOV version 1.15 00:16:24.788 07:13:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:39.656 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:39.656 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:16:54.522 07:14:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:16:54.522 07:14:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.522 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:16:54.522 07:14:17 -- spdk/autotest.sh@78 -- # rm -f 00:16:54.522 07:14:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:54.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.522 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:54.522 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:54.522 07:14:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:16:54.522 07:14:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:54.522 07:14:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:54.522 07:14:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:54.522 07:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:54.522 07:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:54.522 07:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:54.522 07:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:54.522 07:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:54.522 07:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:54.522 07:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:54.522 07:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:16:54.522 07:14:17 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n2 00:16:54.522 07:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:54.522 07:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:16:54.522 07:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:16:54.522 07:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:54.522 07:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.522 07:14:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:16:54.522 07:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:54.522 07:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:54.522 07:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:16:54.522 07:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:54.522 07:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:54.522 No valid GPT data, bailing 00:16:54.522 07:14:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:54.522 07:14:17 -- scripts/common.sh@394 -- # pt= 00:16:54.522 07:14:17 -- scripts/common.sh@395 -- # return 1 00:16:54.522 07:14:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:54.522 1+0 records in 00:16:54.522 1+0 records out 00:16:54.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409244 s, 256 MB/s 00:16:54.522 07:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:54.522 07:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:54.522 07:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:16:54.522 07:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:16:54.522 07:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:54.522 No valid GPT data, bailing 00:16:54.523 07:14:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:54.523 07:14:17 -- scripts/common.sh@394 -- # pt= 00:16:54.523 07:14:17 -- scripts/common.sh@395 -- # return 1 00:16:54.523 07:14:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:54.523 1+0 records in 00:16:54.523 1+0 records out 00:16:54.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343512 s, 305 MB/s 00:16:54.523 07:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:54.523 07:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:54.523 07:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:16:54.523 07:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:16:54.523 07:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:16:54.523 No valid GPT data, bailing 00:16:54.523 07:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:16:54.523 07:14:18 -- scripts/common.sh@394 -- # pt= 00:16:54.523 07:14:18 -- scripts/common.sh@395 -- # return 1 00:16:54.523 07:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:16:54.523 1+0 records in 00:16:54.523 1+0 records out 00:16:54.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407519 s, 257 MB/s 00:16:54.523 07:14:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:54.523 07:14:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 
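The pre-cleanup traced above walks every non-partition NVMe namespace, skips zoned devices, and zeroes the first MiB of anything without a recognizable partition table. A condensed sketch of that loop, assuming root and GNU coreutils (block_in_use in the real script also consults scripts/spdk-gpt.py; this version keeps only the blkid check):

    # Skip zoned namespaces (queue/zoned reports anything but "none"),
    # then wipe the first MiB of each namespace with no partition table.
    shopt -s extglob   # required for the !(*p*) glob used by autotest.sh
    for dev in /dev/nvme*n!(*p*); do
        zoned=/sys/block/$(basename "$dev")/queue/zoned
        [[ -e $zoned && $(< "$zoned") != none ]] && continue
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

The "1+0 records in/out" lines and MB/s figures around this point are dd reporting each 1 MiB wipe.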
00:16:54.523 07:14:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:16:54.523 07:14:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:16:54.523 07:14:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:16:54.523 No valid GPT data, bailing 00:16:54.523 07:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:16:54.523 07:14:18 -- scripts/common.sh@394 -- # pt= 00:16:54.523 07:14:18 -- scripts/common.sh@395 -- # return 1 00:16:54.523 07:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:16:54.523 1+0 records in 00:16:54.523 1+0 records out 00:16:54.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319138 s, 329 MB/s 00:16:54.523 07:14:18 -- spdk/autotest.sh@105 -- # sync 00:16:54.523 07:14:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:54.523 07:14:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:54.523 07:14:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:55.906 07:14:19 -- spdk/autotest.sh@111 -- # uname -s 00:16:55.906 07:14:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:16:55.906 07:14:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:16:55.906 07:14:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:56.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:56.163 Hugepages 00:16:56.163 node hugesize free / total 00:16:56.163 node0 1048576kB 0 / 0 00:16:56.163 node0 2048kB 0 / 0 00:16:56.163 00:16:56.163 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:56.163 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:56.423 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:16:56.423 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:16:56.423 07:14:20 -- spdk/autotest.sh@117 -- # uname -s 00:16:56.423 07:14:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:16:56.423 07:14:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:16:56.423 07:14:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:56.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:56.991 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:56.991 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:57.252 07:14:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:16:58.190 07:14:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:16:58.190 07:14:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:16:58.190 07:14:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:16:58.190 07:14:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:16:58.190 07:14:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:58.190 07:14:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:16:58.190 07:14:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:58.190 07:14:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:58.190 07:14:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:58.190 07:14:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:16:58.190 07:14:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:58.190 07:14:22 -- 
common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:58.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.448 Waiting for block devices as requested 00:16:58.448 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.707 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.707 07:14:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:16:58.707 07:14:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:16:58.707 07:14:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:16:58.707 07:14:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:16:58.707 07:14:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1543 -- # continue 00:16:58.707 07:14:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:16:58.707 07:14:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:16:58.707 07:14:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:16:58.707 07:14:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:16:58.707 07:14:22 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:16:58.707 07:14:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:16:58.707 07:14:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:16:58.707 07:14:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:16:58.707 07:14:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:16:58.707 07:14:22 -- common/autotest_common.sh@1543 -- # continue 00:16:58.707 07:14:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:16:58.707 07:14:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.707 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.707 07:14:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:16:58.707 07:14:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.707 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.707 07:14:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:59.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:59.274 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.274 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.532 07:14:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:16:59.532 07:14:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.532 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:16:59.532 07:14:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:16:59.532 07:14:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:16:59.532 07:14:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:16:59.532 07:14:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:16:59.532 07:14:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:16:59.532 07:14:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:16:59.532 07:14:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:16:59.532 07:14:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:16:59.532 07:14:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:59.532 07:14:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:16:59.532 07:14:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:59.532 07:14:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:59.532 07:14:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:59.532 07:14:23 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:16:59.532 07:14:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:59.532 07:14:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:16:59.532 07:14:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:16:59.532 07:14:23 -- common/autotest_common.sh@1566 -- # device=0x0010 00:16:59.532 07:14:23 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:59.532 07:14:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:16:59.532 07:14:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:16:59.532 07:14:23 -- common/autotest_common.sh@1566 -- # device=0x0010 00:16:59.532 
07:14:23 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:59.532 07:14:23 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:16:59.532 07:14:23 -- common/autotest_common.sh@1572 -- # return 0 00:16:59.532 07:14:23 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:16:59.532 07:14:23 -- common/autotest_common.sh@1580 -- # return 0 00:16:59.532 07:14:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:16:59.532 07:14:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:16:59.532 07:14:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:59.532 07:14:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:59.532 07:14:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:16:59.532 07:14:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.532 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:16:59.532 07:14:23 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:16:59.532 07:14:23 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:16:59.532 07:14:23 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:16:59.532 07:14:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:59.532 07:14:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.532 07:14:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.532 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:16:59.533 ************************************ 00:16:59.533 START TEST env 00:16:59.533 ************************************ 00:16:59.533 07:14:23 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:59.533 * Looking for test storage... 00:16:59.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:16:59.533 07:14:23 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.533 07:14:23 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.533 07:14:23 env -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.791 07:14:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.791 07:14:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.791 07:14:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.791 07:14:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.791 07:14:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.791 07:14:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.791 07:14:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.791 07:14:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.791 07:14:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.791 07:14:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.791 07:14:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.791 07:14:23 env -- scripts/common.sh@344 -- # case "$op" in 00:16:59.791 07:14:23 env -- scripts/common.sh@345 -- # : 1 00:16:59.791 07:14:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.791 07:14:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.791 07:14:23 env -- scripts/common.sh@365 -- # decimal 1 00:16:59.791 07:14:23 env -- scripts/common.sh@353 -- # local d=1 00:16:59.791 07:14:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.791 07:14:23 env -- scripts/common.sh@355 -- # echo 1 00:16:59.791 07:14:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.791 07:14:23 env -- scripts/common.sh@366 -- # decimal 2 00:16:59.791 07:14:23 env -- scripts/common.sh@353 -- # local d=2 00:16:59.791 07:14:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.791 07:14:23 env -- scripts/common.sh@355 -- # echo 2 00:16:59.791 07:14:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.791 07:14:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.791 07:14:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.791 07:14:23 env -- scripts/common.sh@368 -- # return 0 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.791 --rc genhtml_branch_coverage=1 00:16:59.791 --rc genhtml_function_coverage=1 00:16:59.791 --rc genhtml_legend=1 00:16:59.791 --rc geninfo_all_blocks=1 00:16:59.791 --rc geninfo_unexecuted_blocks=1 00:16:59.791 00:16:59.791 ' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.791 --rc genhtml_branch_coverage=1 00:16:59.791 --rc genhtml_function_coverage=1 00:16:59.791 --rc genhtml_legend=1 00:16:59.791 --rc geninfo_all_blocks=1 00:16:59.791 --rc geninfo_unexecuted_blocks=1 00:16:59.791 00:16:59.791 ' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.791 --rc genhtml_branch_coverage=1 00:16:59.791 --rc genhtml_function_coverage=1 00:16:59.791 --rc genhtml_legend=1 00:16:59.791 --rc geninfo_all_blocks=1 00:16:59.791 --rc geninfo_unexecuted_blocks=1 00:16:59.791 00:16:59.791 ' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.791 --rc genhtml_branch_coverage=1 00:16:59.791 --rc genhtml_function_coverage=1 00:16:59.791 --rc genhtml_legend=1 00:16:59.791 --rc geninfo_all_blocks=1 00:16:59.791 --rc geninfo_unexecuted_blocks=1 00:16:59.791 00:16:59.791 ' 00:16:59.791 07:14:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.791 07:14:23 env -- common/autotest_common.sh@10 -- # set +x 00:16:59.791 ************************************ 00:16:59.791 START TEST env_memory 00:16:59.791 ************************************ 00:16:59.791 07:14:23 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:59.791 00:16:59.791 00:16:59.791 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.791 http://cunit.sourceforge.net/ 00:16:59.791 00:16:59.791 00:16:59.791 Suite: memory 00:16:59.791 Test: alloc and free memory map ...[2024-11-20 07:14:23.807898] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:16:59.791 passed 00:16:59.791 Test: mem map translation ...[2024-11-20 07:14:23.831435] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:16:59.791 [2024-11-20 07:14:23.831466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:16:59.791 [2024-11-20 07:14:23.831508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:16:59.791 [2024-11-20 07:14:23.831515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:16:59.791 passed 00:16:59.791 Test: mem map registration ...[2024-11-20 07:14:23.882604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:16:59.791 [2024-11-20 07:14:23.882644] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:16:59.791 passed 00:16:59.791 Test: mem map adjacent registrations ...passed 00:16:59.791 00:16:59.791 Run Summary: Type Total Ran Passed Failed Inactive 00:16:59.791 suites 1 1 n/a 0 0 00:16:59.791 tests 4 4 4 0 0 00:16:59.791 asserts 152 152 152 0 n/a 00:16:59.791 00:16:59.791 Elapsed time = 0.169 seconds 00:16:59.791 00:16:59.791 real 0m0.178s 00:16:59.791 user 0m0.168s 00:16:59.791 sys 0m0.008s 00:16:59.791 07:14:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.791 07:14:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:16:59.791 ************************************ 00:16:59.791 END TEST env_memory 00:16:59.791 ************************************ 00:16:59.791 07:14:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.791 07:14:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.791 07:14:23 env -- common/autotest_common.sh@10 -- # set +x 00:16:59.791 ************************************ 00:16:59.791 START TEST env_vtophys 00:16:59.791 ************************************ 00:16:59.791 07:14:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:00.049 EAL: lib.eal log level changed from notice to debug 00:17:00.049 EAL: Detected lcore 0 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 1 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 2 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 3 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 4 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 5 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 6 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 7 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 8 as core 0 on socket 0 00:17:00.049 EAL: Detected lcore 9 as core 0 on socket 0 00:17:00.049 EAL: Maximum logical cores by configuration: 128 00:17:00.049 EAL: Detected CPU lcores: 10 00:17:00.049 EAL: Detected NUMA nodes: 1 00:17:00.049 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:00.049 EAL: Detected shared linkage of DPDK 00:17:00.049 EAL: No 
shared files mode enabled, IPC will be disabled 00:17:00.049 EAL: Selected IOVA mode 'PA' 00:17:00.049 EAL: Probing VFIO support... 00:17:00.049 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:00.049 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:00.049 EAL: Ask a virtual area of 0x2e000 bytes 00:17:00.050 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:00.050 EAL: Setting up physically contiguous memory... 00:17:00.050 EAL: Setting maximum number of open files to 524288 00:17:00.050 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:00.050 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:00.050 EAL: Ask a virtual area of 0x61000 bytes 00:17:00.050 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:00.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:00.050 EAL: Ask a virtual area of 0x400000000 bytes 00:17:00.050 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:00.050 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:00.050 EAL: Ask a virtual area of 0x61000 bytes 00:17:00.050 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:00.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:00.050 EAL: Ask a virtual area of 0x400000000 bytes 00:17:00.050 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:00.050 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:00.050 EAL: Ask a virtual area of 0x61000 bytes 00:17:00.050 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:00.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:00.050 EAL: Ask a virtual area of 0x400000000 bytes 00:17:00.050 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:00.050 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:00.050 EAL: Ask a virtual area of 0x61000 bytes 00:17:00.050 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:00.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:00.050 EAL: Ask a virtual area of 0x400000000 bytes 00:17:00.050 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:00.050 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:00.050 EAL: Hugepages will be freed exactly as allocated. 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: TSC frequency is ~2600000 KHz 00:17:00.050 EAL: Main lcore 0 is ready (tid=7f99e531ca00;cpuset=[0]) 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 0 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 2MB 00:17:00.050 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:00.050 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:00.050 EAL: Mem event callback 'spdk:(nil)' registered 00:17:00.050 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:17:00.050 00:17:00.050 00:17:00.050 CUnit - A unit testing framework for C - Version 2.1-3 00:17:00.050 http://cunit.sourceforge.net/ 00:17:00.050 00:17:00.050 00:17:00.050 Suite: components_suite 00:17:00.050 Test: vtophys_malloc_test ...passed 00:17:00.050 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 4MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 4MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 6MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 6MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 10MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 10MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 18MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 18MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 34MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 34MB 00:17:00.050 EAL: Trying to obtain current memory policy. 
00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 66MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 66MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.050 EAL: Restoring previous memory policy: 4 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was expanded by 130MB 00:17:00.050 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.050 EAL: request: mp_malloc_sync 00:17:00.050 EAL: No shared files mode enabled, IPC is disabled 00:17:00.050 EAL: Heap on socket 0 was shrunk by 130MB 00:17:00.050 EAL: Trying to obtain current memory policy. 00:17:00.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.308 EAL: Restoring previous memory policy: 4 00:17:00.308 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.308 EAL: request: mp_malloc_sync 00:17:00.308 EAL: No shared files mode enabled, IPC is disabled 00:17:00.308 EAL: Heap on socket 0 was expanded by 258MB 00:17:00.308 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.308 EAL: request: mp_malloc_sync 00:17:00.308 EAL: No shared files mode enabled, IPC is disabled 00:17:00.308 EAL: Heap on socket 0 was shrunk by 258MB 00:17:00.308 EAL: Trying to obtain current memory policy. 00:17:00.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.308 EAL: Restoring previous memory policy: 4 00:17:00.308 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.308 EAL: request: mp_malloc_sync 00:17:00.308 EAL: No shared files mode enabled, IPC is disabled 00:17:00.308 EAL: Heap on socket 0 was expanded by 514MB 00:17:00.308 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.308 EAL: request: mp_malloc_sync 00:17:00.308 EAL: No shared files mode enabled, IPC is disabled 00:17:00.308 EAL: Heap on socket 0 was shrunk by 514MB 00:17:00.308 EAL: Trying to obtain current memory policy. 
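Each vtophys_malloc_test cycle above follows the same shape: restore the NUMA memory policy, allocate, watch the EAL heap expand, free, and watch it shrink by the same amount. One illustrative way to tally the cycles from a saved copy of this output (the vtophys.log filename is hypothetical):

    # Count how often the EAL heap grew or shrank, grouped by size.
    grep -o 'Heap on socket 0 was \(expanded\|shrunk\) by [0-9]*MB' vtophys.log | sort | uniq -c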
00:17:00.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:00.565 EAL: Restoring previous memory policy: 4 00:17:00.565 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.565 EAL: request: mp_malloc_sync 00:17:00.565 EAL: No shared files mode enabled, IPC is disabled 00:17:00.565 EAL: Heap on socket 0 was expanded by 1026MB 00:17:00.565 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.823 passed 00:17:00.823 00:17:00.823 Run Summary: Type Total Ran Passed Failed Inactive 00:17:00.823 suites 1 1 n/a 0 0 00:17:00.823 tests 2 2 2 0 0 00:17:00.823 asserts 5708 5708 5708 0 n/a 00:17:00.823 00:17:00.823 Elapsed time = 0.659 seconds 00:17:00.824 EAL: request: mp_malloc_sync 00:17:00.824 EAL: No shared files mode enabled, IPC is disabled 00:17:00.824 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:00.824 EAL: Calling mem event callback 'spdk:(nil)' 00:17:00.824 EAL: request: mp_malloc_sync 00:17:00.824 EAL: No shared files mode enabled, IPC is disabled 00:17:00.824 EAL: Heap on socket 0 was shrunk by 2MB 00:17:00.824 EAL: No shared files mode enabled, IPC is disabled 00:17:00.824 EAL: No shared files mode enabled, IPC is disabled 00:17:00.824 EAL: No shared files mode enabled, IPC is disabled 00:17:00.824 00:17:00.824 real 0m0.844s 00:17:00.824 user 0m0.415s 00:17:00.824 sys 0m0.302s 00:17:00.824 07:14:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.824 07:14:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 END TEST env_vtophys 00:17:00.824 ************************************ 00:17:00.824 07:14:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:00.824 07:14:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.824 07:14:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.824 07:14:24 env -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 START TEST env_pci 00:17:00.824 ************************************ 00:17:00.824 07:14:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:00.824 00:17:00.824 00:17:00.824 CUnit - A unit testing framework for C - Version 2.1-3 00:17:00.824 http://cunit.sourceforge.net/ 00:17:00.824 00:17:00.824 00:17:00.824 Suite: pci 00:17:00.824 Test: pci_hook ...[2024-11-20 07:14:24.882837] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55977 has claimed it 00:17:00.824 passed 00:17:00.824 00:17:00.824 Run Summary: Type Total Ran Passed Failed Inactive 00:17:00.824 suites 1 1 n/a 0 0 00:17:00.824 tests 1 1 1 0 0 00:17:00.824 asserts 25 25 25 0 n/a 00:17:00.824 00:17:00.824 Elapsed time = 0.001 seconds 00:17:00.824 EAL: Cannot find device (10000:00:01.0) 00:17:00.824 EAL: Failed to attach device on primary process 00:17:00.824 00:17:00.824 real 0m0.016s 00:17:00.824 user 0m0.005s 00:17:00.824 sys 0m0.011s 00:17:00.824 07:14:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.824 07:14:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 END TEST env_pci 00:17:00.824 ************************************ 00:17:00.824 07:14:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:00.824 07:14:24 env -- env/env.sh@15 -- # uname 00:17:00.824 07:14:24 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:00.824 07:14:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:00.824 07:14:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:00.824 07:14:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:00.824 07:14:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.824 07:14:24 env -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 START TEST env_dpdk_post_init 00:17:00.824 ************************************ 00:17:00.824 07:14:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:00.824 EAL: Detected CPU lcores: 10 00:17:00.824 EAL: Detected NUMA nodes: 1 00:17:00.824 EAL: Detected shared linkage of DPDK 00:17:00.824 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:00.824 EAL: Selected IOVA mode 'PA' 00:17:01.082 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:01.082 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:01.082 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:17:01.082 Starting DPDK initialization... 00:17:01.082 Starting SPDK post initialization... 00:17:01.082 SPDK NVMe probe 00:17:01.082 Attaching to 0000:00:10.0 00:17:01.082 Attaching to 0000:00:11.0 00:17:01.082 Attached to 0000:00:10.0 00:17:01.082 Attached to 0000:00:11.0 00:17:01.082 Cleaning up... 00:17:01.082 00:17:01.082 real 0m0.171s 00:17:01.082 user 0m0.043s 00:17:01.082 sys 0m0.028s 00:17:01.082 07:14:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.082 ************************************ 00:17:01.082 END TEST env_dpdk_post_init 00:17:01.082 ************************************ 00:17:01.082 07:14:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.082 07:14:25 env -- env/env.sh@26 -- # uname 00:17:01.082 07:14:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:01.082 07:14:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:01.082 07:14:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:01.082 07:14:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.082 07:14:25 env -- common/autotest_common.sh@10 -- # set +x 00:17:01.082 ************************************ 00:17:01.082 START TEST env_mem_callbacks 00:17:01.082 ************************************ 00:17:01.082 07:14:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:01.082 EAL: Detected CPU lcores: 10 00:17:01.082 EAL: Detected NUMA nodes: 1 00:17:01.082 EAL: Detected shared linkage of DPDK 00:17:01.082 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:01.082 EAL: Selected IOVA mode 'PA' 00:17:01.082 00:17:01.082 00:17:01.082 CUnit - A unit testing framework for C - Version 2.1-3 00:17:01.082 http://cunit.sourceforge.net/ 00:17:01.082 00:17:01.082 00:17:01.082 Suite: memory 00:17:01.082 Test: test ... 
00:17:01.082 register 0x200000200000 2097152 00:17:01.082 malloc 3145728 00:17:01.082 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:01.082 register 0x200000400000 4194304 00:17:01.082 buf 0x200000500000 len 3145728 PASSED 00:17:01.082 malloc 64 00:17:01.082 buf 0x2000004fff40 len 64 PASSED 00:17:01.082 malloc 4194304 00:17:01.082 register 0x200000800000 6291456 00:17:01.082 buf 0x200000a00000 len 4194304 PASSED 00:17:01.082 free 0x200000500000 3145728 00:17:01.082 free 0x2000004fff40 64 00:17:01.082 unregister 0x200000400000 4194304 PASSED 00:17:01.082 free 0x200000a00000 4194304 00:17:01.082 unregister 0x200000800000 6291456 PASSED 00:17:01.082 malloc 8388608 00:17:01.082 register 0x200000400000 10485760 00:17:01.082 buf 0x200000600000 len 8388608 PASSED 00:17:01.082 free 0x200000600000 8388608 00:17:01.082 unregister 0x200000400000 10485760 PASSED 00:17:01.082 passed 00:17:01.082 00:17:01.082 Run Summary: Type Total Ran Passed Failed Inactive 00:17:01.082 suites 1 1 n/a 0 0 00:17:01.082 tests 1 1 1 0 0 00:17:01.082 asserts 15 15 15 0 n/a 00:17:01.082 00:17:01.082 Elapsed time = 0.008 seconds 00:17:01.082 00:17:01.082 real 0m0.132s 00:17:01.082 user 0m0.013s 00:17:01.082 sys 0m0.018s 00:17:01.082 07:14:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.082 07:14:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:01.082 ************************************ 00:17:01.082 END TEST env_mem_callbacks 00:17:01.082 ************************************ 00:17:01.340 00:17:01.340 real 0m1.699s 00:17:01.340 user 0m0.811s 00:17:01.340 sys 0m0.558s 00:17:01.340 07:14:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.340 07:14:25 env -- common/autotest_common.sh@10 -- # set +x 00:17:01.340 ************************************ 00:17:01.340 END TEST env 00:17:01.340 ************************************ 00:17:01.340 07:14:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:01.340 07:14:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:01.340 07:14:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.340 07:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:01.340 ************************************ 00:17:01.340 START TEST rpc 00:17:01.340 ************************************ 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:01.340 * Looking for test storage... 
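The lt 1.15 2 check that opens each test suite (it appears before the env tests above and again just below for rpc) decides whether the installed lcov predates 2.0 and therefore needs the --rc branch/function coverage options. Its core is a field-by-field dotted-version comparison; reconstructed here as a standalone function (the names follow the traced scripts/common.sh, the compaction into one function is mine):

    # Split both versions on '.', '-' and ':' and compare numerically,
    # field by field; missing fields count as 0. Returns 0 when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "lcov < 2: enable --rc lcov_branch_coverage=1 etc."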
00:17:01.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.340 07:14:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.340 07:14:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.340 07:14:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.340 07:14:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.340 07:14:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.340 07:14:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:01.340 07:14:25 rpc -- scripts/common.sh@345 -- # : 1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.340 07:14:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.340 07:14:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@353 -- # local d=1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.340 07:14:25 rpc -- scripts/common.sh@355 -- # echo 1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.340 07:14:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@353 -- # local d=2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.340 07:14:25 rpc -- scripts/common.sh@355 -- # echo 2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.340 07:14:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.340 07:14:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.340 07:14:25 rpc -- scripts/common.sh@368 -- # return 0 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.340 --rc genhtml_branch_coverage=1 00:17:01.340 --rc genhtml_function_coverage=1 00:17:01.340 --rc genhtml_legend=1 00:17:01.340 --rc geninfo_all_blocks=1 00:17:01.340 --rc geninfo_unexecuted_blocks=1 00:17:01.340 00:17:01.340 ' 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.340 --rc genhtml_branch_coverage=1 00:17:01.340 --rc genhtml_function_coverage=1 00:17:01.340 --rc genhtml_legend=1 00:17:01.340 --rc geninfo_all_blocks=1 00:17:01.340 --rc geninfo_unexecuted_blocks=1 00:17:01.340 00:17:01.340 ' 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.340 --rc genhtml_branch_coverage=1 00:17:01.340 --rc genhtml_function_coverage=1 00:17:01.340 --rc 
genhtml_legend=1 00:17:01.340 --rc geninfo_all_blocks=1 00:17:01.340 --rc geninfo_unexecuted_blocks=1 00:17:01.340 00:17:01.340 ' 00:17:01.340 07:14:25 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.340 --rc genhtml_branch_coverage=1 00:17:01.340 --rc genhtml_function_coverage=1 00:17:01.340 --rc genhtml_legend=1 00:17:01.340 --rc geninfo_all_blocks=1 00:17:01.340 --rc geninfo_unexecuted_blocks=1 00:17:01.340 00:17:01.340 ' 00:17:01.340 07:14:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56089 00:17:01.340 07:14:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:01.340 07:14:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:01.340 07:14:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56089 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 56089 ']' 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.341 07:14:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.341 [2024-11-20 07:14:25.530921] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:01.341 [2024-11-20 07:14:25.530981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56089 ] 00:17:01.598 [2024-11-20 07:14:25.670308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.598 [2024-11-20 07:14:25.708613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:01.598 [2024-11-20 07:14:25.708652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56089' to capture a snapshot of events at runtime. 00:17:01.598 [2024-11-20 07:14:25.708659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.598 [2024-11-20 07:14:25.708664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.598 [2024-11-20 07:14:25.708669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56089 for offline analysis/debug. 
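An aside on the app_setup_trace notices above: they spell out how to snapshot the 'bdev' tracepoint group that spdk_tgt was started with ('-e bdev'). A minimal sketch, taking the live command straight from the log (pid 56089 is specific to this run); the offline '-f' variant is an assumption about spdk_trace's usual flags rather than something this log shows:

    # live snapshot while the target (pid 56089) is still running
    spdk_trace -s spdk_tgt -p 56089
    # offline analysis from the copied shm file (assumed '-f' flag)
    spdk_trace -f /dev/shm/spdk_tgt_trace.pid56089
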
00:17:01.598 [2024-11-20 07:14:25.708937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.598 [2024-11-20 07:14:25.753649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.532 07:14:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.532 07:14:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:17:02.532 07:14:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:02.532 07:14:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:02.532 07:14:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:02.532 07:14:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:02.532 07:14:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.532 07:14:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.532 07:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 ************************************ 00:17:02.532 START TEST rpc_integrity 00:17:02.532 ************************************ 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:02.532 { 00:17:02.532 "name": "Malloc0", 00:17:02.532 "aliases": [ 00:17:02.532 "0c34c794-d532-4c07-bbe4-ee6c5f69d49a" 00:17:02.532 ], 00:17:02.532 "product_name": "Malloc disk", 00:17:02.532 "block_size": 512, 00:17:02.532 "num_blocks": 16384, 00:17:02.532 "uuid": "0c34c794-d532-4c07-bbe4-ee6c5f69d49a", 00:17:02.532 "assigned_rate_limits": { 00:17:02.532 "rw_ios_per_sec": 0, 00:17:02.532 "rw_mbytes_per_sec": 0, 00:17:02.532 "r_mbytes_per_sec": 0, 00:17:02.532 "w_mbytes_per_sec": 0 00:17:02.532 }, 00:17:02.532 "claimed": false, 00:17:02.532 "zoned": false, 00:17:02.532 
"supported_io_types": { 00:17:02.532 "read": true, 00:17:02.532 "write": true, 00:17:02.532 "unmap": true, 00:17:02.532 "flush": true, 00:17:02.532 "reset": true, 00:17:02.532 "nvme_admin": false, 00:17:02.532 "nvme_io": false, 00:17:02.532 "nvme_io_md": false, 00:17:02.532 "write_zeroes": true, 00:17:02.532 "zcopy": true, 00:17:02.532 "get_zone_info": false, 00:17:02.532 "zone_management": false, 00:17:02.532 "zone_append": false, 00:17:02.532 "compare": false, 00:17:02.532 "compare_and_write": false, 00:17:02.532 "abort": true, 00:17:02.532 "seek_hole": false, 00:17:02.532 "seek_data": false, 00:17:02.532 "copy": true, 00:17:02.532 "nvme_iov_md": false 00:17:02.532 }, 00:17:02.532 "memory_domains": [ 00:17:02.532 { 00:17:02.532 "dma_device_id": "system", 00:17:02.532 "dma_device_type": 1 00:17:02.532 }, 00:17:02.532 { 00:17:02.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.532 "dma_device_type": 2 00:17:02.532 } 00:17:02.532 ], 00:17:02.532 "driver_specific": {} 00:17:02.532 } 00:17:02.532 ]' 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 [2024-11-20 07:14:26.513342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:02.532 [2024-11-20 07:14:26.513379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.532 [2024-11-20 07:14:26.513391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f8f20 00:17:02.532 [2024-11-20 07:14:26.513396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.532 [2024-11-20 07:14:26.514725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.532 [2024-11-20 07:14:26.514753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:02.532 Passthru0 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.532 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.532 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:02.532 { 00:17:02.532 "name": "Malloc0", 00:17:02.532 "aliases": [ 00:17:02.532 "0c34c794-d532-4c07-bbe4-ee6c5f69d49a" 00:17:02.532 ], 00:17:02.532 "product_name": "Malloc disk", 00:17:02.532 "block_size": 512, 00:17:02.532 "num_blocks": 16384, 00:17:02.532 "uuid": "0c34c794-d532-4c07-bbe4-ee6c5f69d49a", 00:17:02.532 "assigned_rate_limits": { 00:17:02.532 "rw_ios_per_sec": 0, 00:17:02.532 "rw_mbytes_per_sec": 0, 00:17:02.532 "r_mbytes_per_sec": 0, 00:17:02.532 "w_mbytes_per_sec": 0 00:17:02.532 }, 00:17:02.532 "claimed": true, 00:17:02.532 "claim_type": "exclusive_write", 00:17:02.532 "zoned": false, 00:17:02.532 "supported_io_types": { 00:17:02.532 "read": true, 00:17:02.532 "write": true, 00:17:02.532 "unmap": true, 00:17:02.532 "flush": true, 00:17:02.532 "reset": true, 00:17:02.532 "nvme_admin": false, 
00:17:02.532 "nvme_io": false, 00:17:02.532 "nvme_io_md": false, 00:17:02.532 "write_zeroes": true, 00:17:02.532 "zcopy": true, 00:17:02.532 "get_zone_info": false, 00:17:02.532 "zone_management": false, 00:17:02.532 "zone_append": false, 00:17:02.532 "compare": false, 00:17:02.532 "compare_and_write": false, 00:17:02.532 "abort": true, 00:17:02.532 "seek_hole": false, 00:17:02.532 "seek_data": false, 00:17:02.532 "copy": true, 00:17:02.532 "nvme_iov_md": false 00:17:02.532 }, 00:17:02.532 "memory_domains": [ 00:17:02.532 { 00:17:02.532 "dma_device_id": "system", 00:17:02.532 "dma_device_type": 1 00:17:02.532 }, 00:17:02.532 { 00:17:02.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.532 "dma_device_type": 2 00:17:02.533 } 00:17:02.533 ], 00:17:02.533 "driver_specific": {} 00:17:02.533 }, 00:17:02.533 { 00:17:02.533 "name": "Passthru0", 00:17:02.533 "aliases": [ 00:17:02.533 "f548b771-ee89-5847-8f79-1981027a15b6" 00:17:02.533 ], 00:17:02.533 "product_name": "passthru", 00:17:02.533 "block_size": 512, 00:17:02.533 "num_blocks": 16384, 00:17:02.533 "uuid": "f548b771-ee89-5847-8f79-1981027a15b6", 00:17:02.533 "assigned_rate_limits": { 00:17:02.533 "rw_ios_per_sec": 0, 00:17:02.533 "rw_mbytes_per_sec": 0, 00:17:02.533 "r_mbytes_per_sec": 0, 00:17:02.533 "w_mbytes_per_sec": 0 00:17:02.533 }, 00:17:02.533 "claimed": false, 00:17:02.533 "zoned": false, 00:17:02.533 "supported_io_types": { 00:17:02.533 "read": true, 00:17:02.533 "write": true, 00:17:02.533 "unmap": true, 00:17:02.533 "flush": true, 00:17:02.533 "reset": true, 00:17:02.533 "nvme_admin": false, 00:17:02.533 "nvme_io": false, 00:17:02.533 "nvme_io_md": false, 00:17:02.533 "write_zeroes": true, 00:17:02.533 "zcopy": true, 00:17:02.533 "get_zone_info": false, 00:17:02.533 "zone_management": false, 00:17:02.533 "zone_append": false, 00:17:02.533 "compare": false, 00:17:02.533 "compare_and_write": false, 00:17:02.533 "abort": true, 00:17:02.533 "seek_hole": false, 00:17:02.533 "seek_data": false, 00:17:02.533 "copy": true, 00:17:02.533 "nvme_iov_md": false 00:17:02.533 }, 00:17:02.533 "memory_domains": [ 00:17:02.533 { 00:17:02.533 "dma_device_id": "system", 00:17:02.533 "dma_device_type": 1 00:17:02.533 }, 00:17:02.533 { 00:17:02.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.533 "dma_device_type": 2 00:17:02.533 } 00:17:02.533 ], 00:17:02.533 "driver_specific": { 00:17:02.533 "passthru": { 00:17:02.533 "name": "Passthru0", 00:17:02.533 "base_bdev_name": "Malloc0" 00:17:02.533 } 00:17:02.533 } 00:17:02.533 } 00:17:02.533 ]' 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:02.533 07:14:26 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:02.533 07:14:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:02.533 00:17:02.533 real 0m0.224s 00:17:02.533 user 0m0.125s 00:17:02.533 sys 0m0.033s 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 ************************************ 00:17:02.533 END TEST rpc_integrity 00:17:02.533 ************************************ 00:17:02.533 07:14:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:02.533 07:14:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.533 07:14:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.533 07:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 ************************************ 00:17:02.533 START TEST rpc_plugins 00:17:02.533 ************************************ 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:02.533 { 00:17:02.533 "name": "Malloc1", 00:17:02.533 "aliases": [ 00:17:02.533 "91eae79a-6334-45f7-afd6-b94862d4b226" 00:17:02.533 ], 00:17:02.533 "product_name": "Malloc disk", 00:17:02.533 "block_size": 4096, 00:17:02.533 "num_blocks": 256, 00:17:02.533 "uuid": "91eae79a-6334-45f7-afd6-b94862d4b226", 00:17:02.533 "assigned_rate_limits": { 00:17:02.533 "rw_ios_per_sec": 0, 00:17:02.533 "rw_mbytes_per_sec": 0, 00:17:02.533 "r_mbytes_per_sec": 0, 00:17:02.533 "w_mbytes_per_sec": 0 00:17:02.533 }, 00:17:02.533 "claimed": false, 00:17:02.533 "zoned": false, 00:17:02.533 "supported_io_types": { 00:17:02.533 "read": true, 00:17:02.533 "write": true, 00:17:02.533 "unmap": true, 00:17:02.533 "flush": true, 00:17:02.533 "reset": true, 00:17:02.533 "nvme_admin": false, 00:17:02.533 "nvme_io": false, 00:17:02.533 "nvme_io_md": false, 00:17:02.533 "write_zeroes": true, 00:17:02.533 "zcopy": true, 00:17:02.533 "get_zone_info": false, 00:17:02.533 "zone_management": false, 00:17:02.533 "zone_append": false, 00:17:02.533 "compare": false, 00:17:02.533 "compare_and_write": false, 00:17:02.533 "abort": true, 00:17:02.533 "seek_hole": false, 00:17:02.533 "seek_data": false, 00:17:02.533 "copy": true, 00:17:02.533 "nvme_iov_md": false 00:17:02.533 }, 00:17:02.533 "memory_domains": [ 00:17:02.533 { 
00:17:02.533 "dma_device_id": "system", 00:17:02.533 "dma_device_type": 1 00:17:02.533 }, 00:17:02.533 { 00:17:02.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.533 "dma_device_type": 2 00:17:02.533 } 00:17:02.533 ], 00:17:02.533 "driver_specific": {} 00:17:02.533 } 00:17:02.533 ]' 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:02.533 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.533 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.791 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.791 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:02.791 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:02.791 07:14:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:02.791 00:17:02.791 real 0m0.109s 00:17:02.791 user 0m0.057s 00:17:02.791 sys 0m0.017s 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.791 07:14:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 ************************************ 00:17:02.791 END TEST rpc_plugins 00:17:02.791 ************************************ 00:17:02.791 07:14:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:02.791 07:14:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.791 07:14:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.791 07:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 ************************************ 00:17:02.791 START TEST rpc_trace_cmd_test 00:17:02.791 ************************************ 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:02.791 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56089", 00:17:02.791 "tpoint_group_mask": "0x8", 00:17:02.791 "iscsi_conn": { 00:17:02.791 "mask": "0x2", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "scsi": { 00:17:02.791 "mask": "0x4", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "bdev": { 00:17:02.791 "mask": "0x8", 00:17:02.791 "tpoint_mask": "0xffffffffffffffff" 00:17:02.791 }, 00:17:02.791 "nvmf_rdma": { 00:17:02.791 "mask": "0x10", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "nvmf_tcp": { 00:17:02.791 "mask": "0x20", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "ftl": { 00:17:02.791 
"mask": "0x40", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "blobfs": { 00:17:02.791 "mask": "0x80", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "dsa": { 00:17:02.791 "mask": "0x200", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "thread": { 00:17:02.791 "mask": "0x400", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "nvme_pcie": { 00:17:02.791 "mask": "0x800", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "iaa": { 00:17:02.791 "mask": "0x1000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "nvme_tcp": { 00:17:02.791 "mask": "0x2000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "bdev_nvme": { 00:17:02.791 "mask": "0x4000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "sock": { 00:17:02.791 "mask": "0x8000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "blob": { 00:17:02.791 "mask": "0x10000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "bdev_raid": { 00:17:02.791 "mask": "0x20000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 }, 00:17:02.791 "scheduler": { 00:17:02.791 "mask": "0x40000", 00:17:02.791 "tpoint_mask": "0x0" 00:17:02.791 } 00:17:02.791 }' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:02.791 00:17:02.791 real 0m0.176s 00:17:02.791 user 0m0.141s 00:17:02.791 sys 0m0.026s 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.791 ************************************ 00:17:02.791 END TEST rpc_trace_cmd_test 00:17:02.791 ************************************ 00:17:02.791 07:14:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 07:14:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:03.050 07:14:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:03.050 07:14:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:03.050 07:14:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.050 07:14:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.050 07:14:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 ************************************ 00:17:03.050 START TEST rpc_daemon_integrity 00:17:03.050 ************************************ 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 
07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:03.050 { 00:17:03.050 "name": "Malloc2", 00:17:03.050 "aliases": [ 00:17:03.050 "2772aeb0-804d-43ed-9d7c-31d11fb014bb" 00:17:03.050 ], 00:17:03.050 "product_name": "Malloc disk", 00:17:03.050 "block_size": 512, 00:17:03.050 "num_blocks": 16384, 00:17:03.050 "uuid": "2772aeb0-804d-43ed-9d7c-31d11fb014bb", 00:17:03.050 "assigned_rate_limits": { 00:17:03.050 "rw_ios_per_sec": 0, 00:17:03.050 "rw_mbytes_per_sec": 0, 00:17:03.050 "r_mbytes_per_sec": 0, 00:17:03.050 "w_mbytes_per_sec": 0 00:17:03.050 }, 00:17:03.050 "claimed": false, 00:17:03.050 "zoned": false, 00:17:03.050 "supported_io_types": { 00:17:03.050 "read": true, 00:17:03.050 "write": true, 00:17:03.050 "unmap": true, 00:17:03.050 "flush": true, 00:17:03.050 "reset": true, 00:17:03.050 "nvme_admin": false, 00:17:03.050 "nvme_io": false, 00:17:03.050 "nvme_io_md": false, 00:17:03.050 "write_zeroes": true, 00:17:03.050 "zcopy": true, 00:17:03.050 "get_zone_info": false, 00:17:03.050 "zone_management": false, 00:17:03.050 "zone_append": false, 00:17:03.050 "compare": false, 00:17:03.050 "compare_and_write": false, 00:17:03.050 "abort": true, 00:17:03.050 "seek_hole": false, 00:17:03.050 "seek_data": false, 00:17:03.050 "copy": true, 00:17:03.050 "nvme_iov_md": false 00:17:03.050 }, 00:17:03.050 "memory_domains": [ 00:17:03.050 { 00:17:03.050 "dma_device_id": "system", 00:17:03.050 "dma_device_type": 1 00:17:03.050 }, 00:17:03.050 { 00:17:03.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.050 "dma_device_type": 2 00:17:03.050 } 00:17:03.050 ], 00:17:03.050 "driver_specific": {} 00:17:03.050 } 00:17:03.050 ]' 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:03.050 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 [2024-11-20 07:14:27.137545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:03.051 [2024-11-20 07:14:27.137582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:03.051 [2024-11-20 07:14:27.137595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xadf8e0 00:17:03.051 [2024-11-20 07:14:27.137601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.051 [2024-11-20 07:14:27.138906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.051 [2024-11-20 07:14:27.138934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:03.051 Passthru0 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:03.051 { 00:17:03.051 "name": "Malloc2", 00:17:03.051 "aliases": [ 00:17:03.051 "2772aeb0-804d-43ed-9d7c-31d11fb014bb" 00:17:03.051 ], 00:17:03.051 "product_name": "Malloc disk", 00:17:03.051 "block_size": 512, 00:17:03.051 "num_blocks": 16384, 00:17:03.051 "uuid": "2772aeb0-804d-43ed-9d7c-31d11fb014bb", 00:17:03.051 "assigned_rate_limits": { 00:17:03.051 "rw_ios_per_sec": 0, 00:17:03.051 "rw_mbytes_per_sec": 0, 00:17:03.051 "r_mbytes_per_sec": 0, 00:17:03.051 "w_mbytes_per_sec": 0 00:17:03.051 }, 00:17:03.051 "claimed": true, 00:17:03.051 "claim_type": "exclusive_write", 00:17:03.051 "zoned": false, 00:17:03.051 "supported_io_types": { 00:17:03.051 "read": true, 00:17:03.051 "write": true, 00:17:03.051 "unmap": true, 00:17:03.051 "flush": true, 00:17:03.051 "reset": true, 00:17:03.051 "nvme_admin": false, 00:17:03.051 "nvme_io": false, 00:17:03.051 "nvme_io_md": false, 00:17:03.051 "write_zeroes": true, 00:17:03.051 "zcopy": true, 00:17:03.051 "get_zone_info": false, 00:17:03.051 "zone_management": false, 00:17:03.051 "zone_append": false, 00:17:03.051 "compare": false, 00:17:03.051 "compare_and_write": false, 00:17:03.051 "abort": true, 00:17:03.051 "seek_hole": false, 00:17:03.051 "seek_data": false, 00:17:03.051 "copy": true, 00:17:03.051 "nvme_iov_md": false 00:17:03.051 }, 00:17:03.051 "memory_domains": [ 00:17:03.051 { 00:17:03.051 "dma_device_id": "system", 00:17:03.051 "dma_device_type": 1 00:17:03.051 }, 00:17:03.051 { 00:17:03.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.051 "dma_device_type": 2 00:17:03.051 } 00:17:03.051 ], 00:17:03.051 "driver_specific": {} 00:17:03.051 }, 00:17:03.051 { 00:17:03.051 "name": "Passthru0", 00:17:03.051 "aliases": [ 00:17:03.051 "acede13f-00e7-5d6f-a38d-d88b9ca6d9a5" 00:17:03.051 ], 00:17:03.051 "product_name": "passthru", 00:17:03.051 "block_size": 512, 00:17:03.051 "num_blocks": 16384, 00:17:03.051 "uuid": "acede13f-00e7-5d6f-a38d-d88b9ca6d9a5", 00:17:03.051 "assigned_rate_limits": { 00:17:03.051 "rw_ios_per_sec": 0, 00:17:03.051 "rw_mbytes_per_sec": 0, 00:17:03.051 "r_mbytes_per_sec": 0, 00:17:03.051 "w_mbytes_per_sec": 0 00:17:03.051 }, 00:17:03.051 "claimed": false, 00:17:03.051 "zoned": false, 00:17:03.051 "supported_io_types": { 00:17:03.051 "read": true, 00:17:03.051 "write": true, 00:17:03.051 "unmap": true, 00:17:03.051 "flush": true, 00:17:03.051 "reset": true, 00:17:03.051 "nvme_admin": false, 00:17:03.051 "nvme_io": false, 00:17:03.051 "nvme_io_md": 
false, 00:17:03.051 "write_zeroes": true, 00:17:03.051 "zcopy": true, 00:17:03.051 "get_zone_info": false, 00:17:03.051 "zone_management": false, 00:17:03.051 "zone_append": false, 00:17:03.051 "compare": false, 00:17:03.051 "compare_and_write": false, 00:17:03.051 "abort": true, 00:17:03.051 "seek_hole": false, 00:17:03.051 "seek_data": false, 00:17:03.051 "copy": true, 00:17:03.051 "nvme_iov_md": false 00:17:03.051 }, 00:17:03.051 "memory_domains": [ 00:17:03.051 { 00:17:03.051 "dma_device_id": "system", 00:17:03.051 "dma_device_type": 1 00:17:03.051 }, 00:17:03.051 { 00:17:03.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.051 "dma_device_type": 2 00:17:03.051 } 00:17:03.051 ], 00:17:03.051 "driver_specific": { 00:17:03.051 "passthru": { 00:17:03.051 "name": "Passthru0", 00:17:03.051 "base_bdev_name": "Malloc2" 00:17:03.051 } 00:17:03.051 } 00:17:03.051 } 00:17:03.051 ]' 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:03.051 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:03.309 07:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:03.309 00:17:03.309 real 0m0.227s 00:17:03.309 user 0m0.122s 00:17:03.309 sys 0m0.040s 00:17:03.309 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.309 ************************************ 00:17:03.309 END TEST rpc_daemon_integrity 00:17:03.309 ************************************ 00:17:03.309 07:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:03.309 07:14:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:03.309 07:14:27 rpc -- rpc/rpc.sh@84 -- # killprocess 56089 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 56089 ']' 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@958 -- # kill -0 56089 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@959 -- # uname 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56089 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.309 
07:14:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.309 killing process with pid 56089 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56089' 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@973 -- # kill 56089 00:17:03.309 07:14:27 rpc -- common/autotest_common.sh@978 -- # wait 56089 00:17:03.309 00:17:03.309 real 0m2.166s 00:17:03.309 user 0m2.681s 00:17:03.309 sys 0m0.501s 00:17:03.567 07:14:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.567 07:14:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.567 ************************************ 00:17:03.567 END TEST rpc 00:17:03.567 ************************************ 00:17:03.567 07:14:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:03.567 07:14:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.567 07:14:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.567 07:14:27 -- common/autotest_common.sh@10 -- # set +x 00:17:03.567 ************************************ 00:17:03.567 START TEST skip_rpc 00:17:03.567 ************************************ 00:17:03.567 07:14:27 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:03.567 * Looking for test storage... 00:17:03.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:03.567 07:14:27 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.567 07:14:27 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.567 07:14:27 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.567 07:14:27 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.567 07:14:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.568 --rc genhtml_branch_coverage=1 00:17:03.568 --rc genhtml_function_coverage=1 00:17:03.568 --rc genhtml_legend=1 00:17:03.568 --rc geninfo_all_blocks=1 00:17:03.568 --rc geninfo_unexecuted_blocks=1 00:17:03.568 00:17:03.568 ' 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.568 --rc genhtml_branch_coverage=1 00:17:03.568 --rc genhtml_function_coverage=1 00:17:03.568 --rc genhtml_legend=1 00:17:03.568 --rc geninfo_all_blocks=1 00:17:03.568 --rc geninfo_unexecuted_blocks=1 00:17:03.568 00:17:03.568 ' 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.568 --rc genhtml_branch_coverage=1 00:17:03.568 --rc genhtml_function_coverage=1 00:17:03.568 --rc genhtml_legend=1 00:17:03.568 --rc geninfo_all_blocks=1 00:17:03.568 --rc geninfo_unexecuted_blocks=1 00:17:03.568 00:17:03.568 ' 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.568 --rc genhtml_branch_coverage=1 00:17:03.568 --rc genhtml_function_coverage=1 00:17:03.568 --rc genhtml_legend=1 00:17:03.568 --rc geninfo_all_blocks=1 00:17:03.568 --rc geninfo_unexecuted_blocks=1 00:17:03.568 00:17:03.568 ' 00:17:03.568 07:14:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:03.568 07:14:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:03.568 07:14:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.568 07:14:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.568 ************************************ 00:17:03.568 START TEST skip_rpc 00:17:03.568 ************************************ 00:17:03.568 07:14:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:17:03.568 07:14:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56290 00:17:03.568 07:14:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:03.568 07:14:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:03.568 07:14:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:03.568 [2024-11-20 07:14:27.733702] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:03.568 [2024-11-20 07:14:27.733760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56290 ] 00:17:03.826 [2024-11-20 07:14:27.871901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.826 [2024-11-20 07:14:27.908297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.827 [2024-11-20 07:14:27.953490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56290 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56290 ']' 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56290 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.092 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56290 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.093 killing process with pid 56290 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56290' 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56290 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56290 00:17:09.093 00:17:09.093 real 0m5.225s 00:17:09.093 user 0m4.968s 00:17:09.093 sys 0m0.163s 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.093 07:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.093 ************************************ 00:17:09.093 END TEST skip_rpc 00:17:09.093 ************************************ 00:17:09.093 07:14:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:09.093 07:14:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.093 07:14:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.093 07:14:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.093 ************************************ 00:17:09.093 START TEST skip_rpc_with_json 00:17:09.093 ************************************ 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56376 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56376 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56376 ']' 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.093 07:14:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:09.093 [2024-11-20 07:14:32.993983] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
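The skip_rpc_with_json test starting here drives the save/replay round trip whose JSON dump follows below: the running target's state is serialized with save_config, then a fresh target is booted from that file. A minimal sketch of the same flow, with paths taken from the test's own traces; 'rpc_cmd' in the traces is the test suite's shell wrapper, and the direct scripts/rpc.py call is the assumed equivalent:

    # serialize the running target's configuration to the test's config file
    scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    # replay it into a fresh target without an RPC server, as the test does later
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
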
00:17:09.093 [2024-11-20 07:14:32.994057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56376 ] 00:17:09.093 [2024-11-20 07:14:33.133061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.093 [2024-11-20 07:14:33.165080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.093 [2024-11-20 07:14:33.207162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:10.026 [2024-11-20 07:14:33.867465] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:10.026 request: 00:17:10.026 { 00:17:10.026 "trtype": "tcp", 00:17:10.026 "method": "nvmf_get_transports", 00:17:10.026 "req_id": 1 00:17:10.026 } 00:17:10.026 Got JSON-RPC error response 00:17:10.026 response: 00:17:10.026 { 00:17:10.026 "code": -19, 00:17:10.026 "message": "No such device" 00:17:10.026 } 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:10.026 [2024-11-20 07:14:33.875546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.026 07:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:10.026 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.026 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:10.027 { 00:17:10.027 "subsystems": [ 00:17:10.027 { 00:17:10.027 "subsystem": "fsdev", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "fsdev_set_opts", 00:17:10.027 "params": { 00:17:10.027 "fsdev_io_pool_size": 65535, 00:17:10.027 "fsdev_io_cache_size": 256 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "keyring", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "iobuf", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "iobuf_set_options", 00:17:10.027 "params": { 00:17:10.027 "small_pool_count": 8192, 00:17:10.027 "large_pool_count": 1024, 00:17:10.027 "small_bufsize": 8192, 00:17:10.027 "large_bufsize": 135168, 00:17:10.027 "enable_numa": false 00:17:10.027 } 
00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "sock", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "sock_set_default_impl", 00:17:10.027 "params": { 00:17:10.027 "impl_name": "uring" 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "sock_impl_set_options", 00:17:10.027 "params": { 00:17:10.027 "impl_name": "ssl", 00:17:10.027 "recv_buf_size": 4096, 00:17:10.027 "send_buf_size": 4096, 00:17:10.027 "enable_recv_pipe": true, 00:17:10.027 "enable_quickack": false, 00:17:10.027 "enable_placement_id": 0, 00:17:10.027 "enable_zerocopy_send_server": true, 00:17:10.027 "enable_zerocopy_send_client": false, 00:17:10.027 "zerocopy_threshold": 0, 00:17:10.027 "tls_version": 0, 00:17:10.027 "enable_ktls": false 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "sock_impl_set_options", 00:17:10.027 "params": { 00:17:10.027 "impl_name": "posix", 00:17:10.027 "recv_buf_size": 2097152, 00:17:10.027 "send_buf_size": 2097152, 00:17:10.027 "enable_recv_pipe": true, 00:17:10.027 "enable_quickack": false, 00:17:10.027 "enable_placement_id": 0, 00:17:10.027 "enable_zerocopy_send_server": true, 00:17:10.027 "enable_zerocopy_send_client": false, 00:17:10.027 "zerocopy_threshold": 0, 00:17:10.027 "tls_version": 0, 00:17:10.027 "enable_ktls": false 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "sock_impl_set_options", 00:17:10.027 "params": { 00:17:10.027 "impl_name": "uring", 00:17:10.027 "recv_buf_size": 2097152, 00:17:10.027 "send_buf_size": 2097152, 00:17:10.027 "enable_recv_pipe": true, 00:17:10.027 "enable_quickack": false, 00:17:10.027 "enable_placement_id": 0, 00:17:10.027 "enable_zerocopy_send_server": false, 00:17:10.027 "enable_zerocopy_send_client": false, 00:17:10.027 "zerocopy_threshold": 0, 00:17:10.027 "tls_version": 0, 00:17:10.027 "enable_ktls": false 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "vmd", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "accel", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "accel_set_options", 00:17:10.027 "params": { 00:17:10.027 "small_cache_size": 128, 00:17:10.027 "large_cache_size": 16, 00:17:10.027 "task_count": 2048, 00:17:10.027 "sequence_count": 2048, 00:17:10.027 "buf_count": 2048 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "bdev", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "bdev_set_options", 00:17:10.027 "params": { 00:17:10.027 "bdev_io_pool_size": 65535, 00:17:10.027 "bdev_io_cache_size": 256, 00:17:10.027 "bdev_auto_examine": true, 00:17:10.027 "iobuf_small_cache_size": 128, 00:17:10.027 "iobuf_large_cache_size": 16 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "bdev_raid_set_options", 00:17:10.027 "params": { 00:17:10.027 "process_window_size_kb": 1024, 00:17:10.027 "process_max_bandwidth_mb_sec": 0 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "bdev_iscsi_set_options", 00:17:10.027 "params": { 00:17:10.027 "timeout_sec": 30 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "bdev_nvme_set_options", 00:17:10.027 "params": { 00:17:10.027 "action_on_timeout": "none", 00:17:10.027 "timeout_us": 0, 00:17:10.027 "timeout_admin_us": 0, 00:17:10.027 "keep_alive_timeout_ms": 10000, 00:17:10.027 "arbitration_burst": 0, 00:17:10.027 "low_priority_weight": 0, 00:17:10.027 "medium_priority_weight": 
0, 00:17:10.027 "high_priority_weight": 0, 00:17:10.027 "nvme_adminq_poll_period_us": 10000, 00:17:10.027 "nvme_ioq_poll_period_us": 0, 00:17:10.027 "io_queue_requests": 0, 00:17:10.027 "delay_cmd_submit": true, 00:17:10.027 "transport_retry_count": 4, 00:17:10.027 "bdev_retry_count": 3, 00:17:10.027 "transport_ack_timeout": 0, 00:17:10.027 "ctrlr_loss_timeout_sec": 0, 00:17:10.027 "reconnect_delay_sec": 0, 00:17:10.027 "fast_io_fail_timeout_sec": 0, 00:17:10.027 "disable_auto_failback": false, 00:17:10.027 "generate_uuids": false, 00:17:10.027 "transport_tos": 0, 00:17:10.027 "nvme_error_stat": false, 00:17:10.027 "rdma_srq_size": 0, 00:17:10.027 "io_path_stat": false, 00:17:10.027 "allow_accel_sequence": false, 00:17:10.027 "rdma_max_cq_size": 0, 00:17:10.027 "rdma_cm_event_timeout_ms": 0, 00:17:10.027 "dhchap_digests": [ 00:17:10.027 "sha256", 00:17:10.027 "sha384", 00:17:10.027 "sha512" 00:17:10.027 ], 00:17:10.027 "dhchap_dhgroups": [ 00:17:10.027 "null", 00:17:10.027 "ffdhe2048", 00:17:10.027 "ffdhe3072", 00:17:10.027 "ffdhe4096", 00:17:10.027 "ffdhe6144", 00:17:10.027 "ffdhe8192" 00:17:10.027 ] 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "bdev_nvme_set_hotplug", 00:17:10.027 "params": { 00:17:10.027 "period_us": 100000, 00:17:10.027 "enable": false 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "bdev_wait_for_examine" 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "scsi", 00:17:10.027 "config": null 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "scheduler", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "framework_set_scheduler", 00:17:10.027 "params": { 00:17:10.027 "name": "static" 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "vhost_scsi", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "vhost_blk", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "ublk", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "nbd", 00:17:10.027 "config": [] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "nvmf", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "nvmf_set_config", 00:17:10.027 "params": { 00:17:10.027 "discovery_filter": "match_any", 00:17:10.027 "admin_cmd_passthru": { 00:17:10.027 "identify_ctrlr": false 00:17:10.027 }, 00:17:10.027 "dhchap_digests": [ 00:17:10.027 "sha256", 00:17:10.027 "sha384", 00:17:10.027 "sha512" 00:17:10.027 ], 00:17:10.027 "dhchap_dhgroups": [ 00:17:10.027 "null", 00:17:10.027 "ffdhe2048", 00:17:10.027 "ffdhe3072", 00:17:10.027 "ffdhe4096", 00:17:10.027 "ffdhe6144", 00:17:10.027 "ffdhe8192" 00:17:10.027 ] 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "nvmf_set_max_subsystems", 00:17:10.027 "params": { 00:17:10.027 "max_subsystems": 1024 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "nvmf_set_crdt", 00:17:10.027 "params": { 00:17:10.027 "crdt1": 0, 00:17:10.027 "crdt2": 0, 00:17:10.027 "crdt3": 0 00:17:10.027 } 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "method": "nvmf_create_transport", 00:17:10.027 "params": { 00:17:10.027 "trtype": "TCP", 00:17:10.027 "max_queue_depth": 128, 00:17:10.027 "max_io_qpairs_per_ctrlr": 127, 00:17:10.027 "in_capsule_data_size": 4096, 00:17:10.027 "max_io_size": 131072, 00:17:10.027 "io_unit_size": 131072, 00:17:10.027 "max_aq_depth": 128, 00:17:10.027 "num_shared_buffers": 511, 00:17:10.027 
"buf_cache_size": 4294967295, 00:17:10.027 "dif_insert_or_strip": false, 00:17:10.027 "zcopy": false, 00:17:10.027 "c2h_success": true, 00:17:10.027 "sock_priority": 0, 00:17:10.027 "abort_timeout_sec": 1, 00:17:10.027 "ack_timeout": 0, 00:17:10.027 "data_wr_pool_size": 0 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 }, 00:17:10.027 { 00:17:10.027 "subsystem": "iscsi", 00:17:10.027 "config": [ 00:17:10.027 { 00:17:10.027 "method": "iscsi_set_options", 00:17:10.027 "params": { 00:17:10.027 "node_base": "iqn.2016-06.io.spdk", 00:17:10.027 "max_sessions": 128, 00:17:10.027 "max_connections_per_session": 2, 00:17:10.027 "max_queue_depth": 64, 00:17:10.027 "default_time2wait": 2, 00:17:10.027 "default_time2retain": 20, 00:17:10.027 "first_burst_length": 8192, 00:17:10.027 "immediate_data": true, 00:17:10.027 "allow_duplicated_isid": false, 00:17:10.027 "error_recovery_level": 0, 00:17:10.027 "nop_timeout": 60, 00:17:10.027 "nop_in_interval": 30, 00:17:10.027 "disable_chap": false, 00:17:10.027 "require_chap": false, 00:17:10.027 "mutual_chap": false, 00:17:10.027 "chap_group": 0, 00:17:10.027 "max_large_datain_per_connection": 64, 00:17:10.027 "max_r2t_per_connection": 4, 00:17:10.027 "pdu_pool_size": 36864, 00:17:10.027 "immediate_data_pool_size": 16384, 00:17:10.027 "data_out_pool_size": 2048 00:17:10.027 } 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 } 00:17:10.027 ] 00:17:10.027 } 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56376 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56376 ']' 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56376 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56376 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56376' 00:17:10.027 killing process with pid 56376 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56376 00:17:10.027 07:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56376 00:17:10.285 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:10.285 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56398 00:17:10.285 07:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:15.542 07:14:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56398 00:17:15.542 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56398 ']' 00:17:15.542 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56398 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:17:15.543 07:14:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56398 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.543 killing process with pid 56398 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56398' 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56398 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56398 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:15.543 00:17:15.543 real 0m6.514s 00:17:15.543 user 0m6.358s 00:17:15.543 sys 0m0.395s 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.543 ************************************ 00:17:15.543 END TEST skip_rpc_with_json 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 ************************************ 00:17:15.543 07:14:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 ************************************ 00:17:15.543 START TEST skip_rpc_with_delay 00:17:15.543 ************************************ 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.543 07:14:39 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.543 [2024-11-20 07:14:39.548278] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.543 00:17:15.543 real 0m0.054s 00:17:15.543 user 0m0.037s 00:17:15.543 sys 0m0.016s 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.543 07:14:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 ************************************ 00:17:15.543 END TEST skip_rpc_with_delay 00:17:15.543 ************************************ 00:17:15.543 07:14:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:17:15.543 07:14:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:15.543 07:14:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.543 07:14:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 ************************************ 00:17:15.543 START TEST exit_on_failed_rpc_init 00:17:15.543 ************************************ 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56502 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56502 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56502 ']' 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.543 07:14:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 [2024-11-20 07:14:39.641586] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:17:15.543 [2024-11-20 07:14:39.641648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56502 ] 00:17:15.800 [2024-11-20 07:14:39.777767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.800 [2024-11-20 07:14:39.810052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.800 [2024-11-20 07:14:39.850061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:16.364 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.364 [2024-11-20 07:14:40.563002] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:16.364 [2024-11-20 07:14:40.563061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56520 ] 00:17:16.621 [2024-11-20 07:14:40.708565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.621 [2024-11-20 07:14:40.750497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.621 [2024-11-20 07:14:40.750564] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:16.621 [2024-11-20 07:14:40.750573] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:16.621 [2024-11-20 07:14:40.750579] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56502 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56502 ']' 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56502 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.621 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56502 00:17:16.879 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.879 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.879 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56502' 00:17:16.879 killing process with pid 56502 00:17:16.879 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56502 00:17:16.879 07:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56502 00:17:16.879 00:17:16.879 real 0m1.401s 00:17:16.879 user 0m1.643s 00:17:16.879 sys 0m0.245s 00:17:16.879 07:14:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.879 07:14:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 ************************************ 00:17:16.879 END TEST exit_on_failed_rpc_init 00:17:16.879 ************************************ 00:17:16.879 07:14:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:16.879 00:17:16.879 real 0m13.487s 00:17:16.879 user 0m13.139s 00:17:16.879 sys 0m0.984s 00:17:16.879 07:14:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.879 07:14:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 ************************************ 00:17:16.879 END TEST skip_rpc 00:17:16.879 ************************************ 00:17:16.879 07:14:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:16.879 07:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.879 07:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.879 07:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 
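The skip_rpc suite above drives spdk_tgt through a save-and-replay cycle: the large JSON blob earlier in this section is the configuration being carried across a restart, and the relaunch uses --no-rpc-server together with --json so the target comes up from the file with no RPC socket at all. A minimal manual sketch of the same round trip, assuming a built SPDK tree and the default /var/tmp/spdk.sock socket (the temporary file name is invented, and spdk_kill_instance is a standard SPDK RPC that the tests above replace with a plain kill on the PID):

    build/bin/spdk_tgt -m 0x1 &                        # start the target; one reactor on core 0
    scripts/rpc.py save_config > /tmp/config.json      # dump the live configuration as JSON
    scripts/rpc.py spdk_kill_instance SIGTERM          # ask the target to shut down
    wait
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json   # replay the saved config, RPC disabled

The -m 0x1 core mask, the --json/--no-rpc-server flags, and the config.json replay all appear verbatim in the trace above.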
************************************ 00:17:16.879 START TEST rpc_client 00:17:16.879 ************************************ 00:17:16.879 07:14:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:17.136 * Looking for test storage... 00:17:17.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:17.136 07:14:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.136 07:14:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.136 07:14:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.136 07:14:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.136 07:14:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.137 07:14:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.137 --rc genhtml_branch_coverage=1 00:17:17.137 --rc genhtml_function_coverage=1 00:17:17.137 --rc genhtml_legend=1 00:17:17.137 --rc geninfo_all_blocks=1 00:17:17.137 --rc geninfo_unexecuted_blocks=1 00:17:17.137 00:17:17.137 ' 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.137 --rc genhtml_branch_coverage=1 00:17:17.137 --rc genhtml_function_coverage=1 00:17:17.137 --rc genhtml_legend=1 00:17:17.137 --rc geninfo_all_blocks=1 00:17:17.137 --rc geninfo_unexecuted_blocks=1 00:17:17.137 00:17:17.137 ' 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.137 --rc genhtml_branch_coverage=1 00:17:17.137 --rc genhtml_function_coverage=1 00:17:17.137 --rc genhtml_legend=1 00:17:17.137 --rc geninfo_all_blocks=1 00:17:17.137 --rc geninfo_unexecuted_blocks=1 00:17:17.137 00:17:17.137 ' 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.137 --rc genhtml_branch_coverage=1 00:17:17.137 --rc genhtml_function_coverage=1 00:17:17.137 --rc genhtml_legend=1 00:17:17.137 --rc geninfo_all_blocks=1 00:17:17.137 --rc geninfo_unexecuted_blocks=1 00:17:17.137 00:17:17.137 ' 00:17:17.137 07:14:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:17.137 OK 00:17:17.137 07:14:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:17.137 00:17:17.137 real 0m0.141s 00:17:17.137 user 0m0.088s 00:17:17.137 sys 0m0.060s 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.137 07:14:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 ************************************ 00:17:17.137 END TEST rpc_client 00:17:17.137 ************************************ 00:17:17.137 07:14:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:17.137 07:14:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.137 07:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.137 07:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 ************************************ 00:17:17.137 START TEST json_config 00:17:17.137 ************************************ 00:17:17.137 07:14:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:17.137 07:14:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.137 07:14:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.137 07:14:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.395 07:14:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.395 07:14:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.395 07:14:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.395 07:14:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.395 07:14:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.395 07:14:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:17:17.395 07:14:41 json_config -- scripts/common.sh@345 -- # : 1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.395 07:14:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.395 07:14:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@353 -- # local d=1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.395 07:14:41 json_config -- scripts/common.sh@355 -- # echo 1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.395 07:14:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@353 -- # local d=2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.395 07:14:41 json_config -- scripts/common.sh@355 -- # echo 2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.395 07:14:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.395 07:14:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.395 07:14:41 json_config -- scripts/common.sh@368 -- # return 0 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.395 --rc genhtml_branch_coverage=1 00:17:17.395 --rc genhtml_function_coverage=1 00:17:17.395 --rc genhtml_legend=1 00:17:17.395 --rc geninfo_all_blocks=1 00:17:17.395 --rc geninfo_unexecuted_blocks=1 00:17:17.395 00:17:17.395 ' 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.395 --rc genhtml_branch_coverage=1 00:17:17.395 --rc genhtml_function_coverage=1 00:17:17.395 --rc genhtml_legend=1 00:17:17.395 --rc geninfo_all_blocks=1 00:17:17.395 --rc geninfo_unexecuted_blocks=1 00:17:17.395 00:17:17.395 ' 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.395 --rc genhtml_branch_coverage=1 00:17:17.395 --rc genhtml_function_coverage=1 00:17:17.395 --rc genhtml_legend=1 00:17:17.395 --rc geninfo_all_blocks=1 00:17:17.395 --rc geninfo_unexecuted_blocks=1 00:17:17.395 00:17:17.395 ' 00:17:17.395 07:14:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.395 --rc genhtml_branch_coverage=1 00:17:17.395 --rc genhtml_function_coverage=1 00:17:17.395 --rc genhtml_legend=1 00:17:17.395 --rc geninfo_all_blocks=1 00:17:17.395 --rc geninfo_unexecuted_blocks=1 00:17:17.395 00:17:17.395 ' 00:17:17.395 07:14:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.395 
07:14:41 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.395 07:14:41 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.395 07:14:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.395 07:14:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.395 07:14:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.395 07:14:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.396 07:14:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.396 07:14:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.396 07:14:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.396 07:14:41 json_config -- paths/export.sh@5 -- # export PATH 00:17:17.396 07:14:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:17.396 07:14:41 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:17.396 07:14:41 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:17.396 07:14:41 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@50 -- # : 0 00:17:17.396 
07:14:41 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:17.396 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:17.396 07:14:41 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:17.396 INFO: JSON configuration test init 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 07:14:41 json_config -- json_config/json_config.sh@272 -- # 
json_config_test_start_app target --wait-for-rpc 00:17:17.396 07:14:41 json_config -- json_config/common.sh@9 -- # local app=target 00:17:17.396 07:14:41 json_config -- json_config/common.sh@10 -- # shift 00:17:17.396 07:14:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:17.396 07:14:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:17.396 07:14:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:17:17.396 07:14:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:17.396 07:14:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:17.396 07:14:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56654 00:17:17.396 Waiting for target to run... 00:17:17.396 07:14:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:17.396 07:14:41 json_config -- json_config/common.sh@25 -- # waitforlisten 56654 /var/tmp/spdk_tgt.sock 00:17:17.396 07:14:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 56654 ']' 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.396 07:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 [2024-11-20 07:14:41.444322] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:17:17.396 [2024-11-20 07:14:41.444387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56654 ] 00:17:17.653 [2024-11-20 07:14:41.741959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.654 [2024-11-20 07:14:41.770356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.246 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:17:18.246 07:14:42 json_config -- json_config/common.sh@26 -- # echo '' 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.246 07:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:17:18.246 07:14:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:17:18.246 07:14:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:17:18.502 [2024-11-20 07:14:42.572963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:17:18.760 07:14:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.760 07:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:17:18.760 07:14:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:17:18.760 07:14:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@54 -- # sort 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:17:19.018 07:14:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:17:19.018 07:14:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.018 07:14:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:17:19.018 07:14:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:17:19.019 07:14:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.019 07:14:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:17:19.019 07:14:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:19.019 07:14:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:19.278 MallocForNvmf0 00:17:19.278 07:14:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:19.278 07:14:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:19.278 MallocForNvmf1 00:17:19.278 07:14:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:17:19.278 07:14:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:17:19.536 [2024-11-20 07:14:43.624143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.536 07:14:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.536 07:14:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.794 07:14:43 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:19.794 07:14:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:20.052 07:14:44 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:20.052 07:14:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:20.309 07:14:44 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:20.309 07:14:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:20.309 [2024-11-20 07:14:44.436519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:20.309 07:14:44 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:17:20.310 07:14:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.310 07:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:20.310 07:14:44 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:17:20.310 07:14:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.310 07:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 07:14:44 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:17:20.567 07:14:44 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:20.567 07:14:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:20.567 MallocBdevForConfigChangeCheck 00:17:20.567 07:14:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:17:20.567 07:14:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.567 07:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 07:14:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:17:20.567 07:14:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:21.132 INFO: shutting down applications... 00:17:21.132 07:14:45 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
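What follows is json_config_clear: clear_config.py tears down every subsystem's objects over the RPC socket, then the test re-dumps the configuration and checks that nothing is left. A rough hand-rolled equivalent, assuming the same /var/tmp/spdk_tgt.sock socket (the pipeline arrangement is a sketch, not the test script's exact code):

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty

delete_global_parameters strips settings that survive a clear (scheduler, accel options and the like, presumably), so check_empty only judges the subsystem objects that clear_config.py was supposed to remove.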
00:17:21.132 07:14:45 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:17:21.132 07:14:45 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:17:21.132 07:14:45 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:17:21.132 07:14:45 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:17:21.390 Calling clear_iscsi_subsystem 00:17:21.390 Calling clear_nvmf_subsystem 00:17:21.390 Calling clear_nbd_subsystem 00:17:21.390 Calling clear_ublk_subsystem 00:17:21.390 Calling clear_vhost_blk_subsystem 00:17:21.390 Calling clear_vhost_scsi_subsystem 00:17:21.390 Calling clear_bdev_subsystem 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:17:21.390 07:14:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:17:21.648 07:14:45 json_config -- json_config/json_config.sh@352 -- # break 00:17:21.648 07:14:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:17:21.648 07:14:45 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:17:21.648 07:14:45 json_config -- json_config/common.sh@31 -- # local app=target 00:17:21.648 07:14:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:21.648 07:14:45 json_config -- json_config/common.sh@35 -- # [[ -n 56654 ]] 00:17:21.648 07:14:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 56654 00:17:21.648 07:14:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:21.648 07:14:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:21.648 07:14:45 json_config -- json_config/common.sh@41 -- # kill -0 56654 00:17:21.648 07:14:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:17:22.216 07:14:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:17:22.216 07:14:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:22.216 07:14:46 json_config -- json_config/common.sh@41 -- # kill -0 56654 00:17:22.216 07:14:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:22.216 07:14:46 json_config -- json_config/common.sh@43 -- # break 00:17:22.216 07:14:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:22.216 SPDK target shutdown done 00:17:22.216 07:14:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:22.216 INFO: relaunching applications... 00:17:22.216 07:14:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
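The relaunch below restarts spdk_tgt with --json pointing at the saved spdk_tgt_config.json, then json_diff.sh checks that the live configuration still matches the file. Both sides are normalized with config_filter.py -method sort before diffing, presumably because save_config does not guarantee ordering. A sketch under the assumption that the filter reads stdin and writes stdout, as the trace suggests (temp file names invented):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/saved.sorted
    diff -u /tmp/live.sorted /tmp/saved.sorted && echo 'INFO: JSON config files are the same'

A zero diff is the pass condition; the second pass of the test then deliberately deletes MallocBdevForConfigChangeCheck so that the very same comparison returns 1, proving a change is detected.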
00:17:22.216 07:14:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:22.216 07:14:46 json_config -- json_config/common.sh@9 -- # local app=target 00:17:22.216 07:14:46 json_config -- json_config/common.sh@10 -- # shift 00:17:22.216 07:14:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:22.216 07:14:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:22.216 07:14:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:17:22.216 07:14:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:22.216 07:14:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:22.216 07:14:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56843 00:17:22.216 Waiting for target to run... 00:17:22.216 07:14:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:22.216 07:14:46 json_config -- json_config/common.sh@25 -- # waitforlisten 56843 /var/tmp/spdk_tgt.sock 00:17:22.216 07:14:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 56843 ']' 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.216 07:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:22.216 [2024-11-20 07:14:46.273263] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:22.216 [2024-11-20 07:14:46.273328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56843 ] 00:17:22.474 [2024-11-20 07:14:46.559119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.474 [2024-11-20 07:14:46.587852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.731 [2024-11-20 07:14:46.724859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.989 [2024-11-20 07:14:46.932825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.989 [2024-11-20 07:14:46.964918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:23.246 07:14:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.246 07:14:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:17:23.246 00:17:23.246 07:14:47 json_config -- json_config/common.sh@26 -- # echo '' 00:17:23.246 07:14:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:17:23.246 INFO: Checking if target configuration is the same... 
00:17:23.246 07:14:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:17:23.246 07:14:47 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:23.246 07:14:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:17:23.246 07:14:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:23.246 + '[' 2 -ne 2 ']' 00:17:23.246 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:17:23.246 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:17:23.246 + rootdir=/home/vagrant/spdk_repo/spdk 00:17:23.246 +++ basename /dev/fd/62 00:17:23.246 ++ mktemp /tmp/62.XXX 00:17:23.246 + tmp_file_1=/tmp/62.ljT 00:17:23.246 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:23.246 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:23.246 + tmp_file_2=/tmp/spdk_tgt_config.json.42x 00:17:23.246 + ret=0 00:17:23.246 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:23.504 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:23.504 + diff -u /tmp/62.ljT /tmp/spdk_tgt_config.json.42x 00:17:23.504 + echo 'INFO: JSON config files are the same' 00:17:23.504 INFO: JSON config files are the same 00:17:23.504 + rm /tmp/62.ljT /tmp/spdk_tgt_config.json.42x 00:17:23.504 + exit 0 00:17:23.504 07:14:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:17:23.504 INFO: changing configuration and checking if this can be detected... 00:17:23.504 07:14:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:17:23.504 07:14:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:23.504 07:14:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:23.761 07:14:47 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:23.761 07:14:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:17:23.761 07:14:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:23.761 + '[' 2 -ne 2 ']' 00:17:23.761 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:17:23.762 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
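The "configuration is the same" check above has a simple shape: dump the live config over RPC, normalize key ordering on both sides with config_filter.py -method sort, and diff the results. Condensed from the trace, with the same paths this run used:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    live=$(mktemp /tmp/62.XXX)
    ondisk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$ondisk"
    diff -u "$live" "$ondisk" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$ondisk"

The change-detection half that follows deletes MallocBdevForConfigChangeCheck over RPC and expects the same diff to exit 1.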
00:17:23.762 + rootdir=/home/vagrant/spdk_repo/spdk 00:17:23.762 +++ basename /dev/fd/62 00:17:23.762 ++ mktemp /tmp/62.XXX 00:17:23.762 + tmp_file_1=/tmp/62.Qn8 00:17:23.762 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:23.762 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:23.762 + tmp_file_2=/tmp/spdk_tgt_config.json.zZU 00:17:23.762 + ret=0 00:17:23.762 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:24.327 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:24.327 + diff -u /tmp/62.Qn8 /tmp/spdk_tgt_config.json.zZU 00:17:24.327 + ret=1 00:17:24.327 + echo '=== Start of file: /tmp/62.Qn8 ===' 00:17:24.327 + cat /tmp/62.Qn8 00:17:24.327 + echo '=== End of file: /tmp/62.Qn8 ===' 00:17:24.327 + echo '' 00:17:24.327 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zZU ===' 00:17:24.327 + cat /tmp/spdk_tgt_config.json.zZU 00:17:24.327 + echo '=== End of file: /tmp/spdk_tgt_config.json.zZU ===' 00:17:24.327 + echo '' 00:17:24.327 + rm /tmp/62.Qn8 /tmp/spdk_tgt_config.json.zZU 00:17:24.327 + exit 1 00:17:24.327 INFO: configuration change detected. 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 56843 ]] 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@330 -- # killprocess 56843 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 56843 ']' 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@958 -- # kill -0 56843 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@959 -- # uname 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56843 00:17:24.327 
07:14:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.327 killing process with pid 56843 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56843' 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@973 -- # kill 56843 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@978 -- # wait 56843 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:17:24.327 INFO: Success 00:17:24.327 07:14:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:17:24.327 00:17:24.327 real 0m7.270s 00:17:24.327 user 0m10.277s 00:17:24.327 sys 0m1.102s 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.327 07:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:24.327 ************************************ 00:17:24.327 END TEST json_config 00:17:24.327 ************************************ 00:17:24.585 07:14:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:24.585 07:14:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:24.585 07:14:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.585 07:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:24.585 ************************************ 00:17:24.585 START TEST json_config_extra_key 00:17:24.585 ************************************ 00:17:24.585 07:14:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:24.585 07:14:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.585 07:14:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.585 07:14:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.585 07:14:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.585 07:14:48 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:24.585 07:14:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:24.586 07:14:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.586 07:14:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.586 --rc genhtml_branch_coverage=1 00:17:24.586 --rc genhtml_function_coverage=1 00:17:24.586 --rc genhtml_legend=1 00:17:24.586 --rc geninfo_all_blocks=1 00:17:24.586 --rc geninfo_unexecuted_blocks=1 00:17:24.586 00:17:24.586 ' 00:17:24.586 07:14:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.586 --rc genhtml_branch_coverage=1 00:17:24.586 --rc genhtml_function_coverage=1 00:17:24.586 --rc genhtml_legend=1 00:17:24.586 --rc geninfo_all_blocks=1 00:17:24.586 --rc geninfo_unexecuted_blocks=1 00:17:24.586 00:17:24.586 ' 00:17:24.586 07:14:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.586 --rc genhtml_branch_coverage=1 00:17:24.586 --rc genhtml_function_coverage=1 00:17:24.586 --rc genhtml_legend=1 00:17:24.586 --rc geninfo_all_blocks=1 00:17:24.586 --rc geninfo_unexecuted_blocks=1 00:17:24.586 00:17:24.586 ' 00:17:24.586 07:14:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.586 --rc genhtml_branch_coverage=1 00:17:24.586 --rc genhtml_function_coverage=1 00:17:24.586 --rc genhtml_legend=1 00:17:24.586 --rc geninfo_all_blocks=1 00:17:24.586 --rc geninfo_unexecuted_blocks=1 00:17:24.586 00:17:24.586 ' 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.586 07:14:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.586 07:14:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.586 07:14:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.586 07:14:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.586 07:14:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:24.586 07:14:48 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:24.586 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:24.586 07:14:48 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:24.586 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:24.587 INFO: launching applications... 00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
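The "[: : integer expression expected" complaint earlier in this block is worth a note: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an unset value, and test's -eq needs integers on both sides. A two-line reproduction and the usual default-value guard:

    v=''
    [ "$v" -eq 1 ] 2>/dev/null || echo "empty string breaks -eq"
    [ "${v:-0}" -eq 1 ] || echo "guarded test is simply false"   # default before comparing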
00:17:24.587 07:14:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56987 00:17:24.587 Waiting for target to run... 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56987 /var/tmp/spdk_tgt.sock 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 56987 ']' 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.587 07:14:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:24.587 07:14:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:24.587 [2024-11-20 07:14:48.727093] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:24.587 [2024-11-20 07:14:48.727154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56987 ] 00:17:24.844 [2024-11-20 07:14:49.023620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.102 [2024-11-20 07:14:49.051580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.102 [2024-11-20 07:14:49.083010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.667 07:14:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.667 07:14:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:17:25.667 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:25.667 INFO: shutting down applications... 00:17:25.667 07:14:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
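The app_pid, app_socket, app_params and configs_path tables declared a step earlier are bash associative arrays keyed by app name; that is how json_config/common.sh can drive more than one process ("target", and in other configurations an initiator) through the same helpers. A minimal sketch mirroring the 'target' entries from this run:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    app=target
    # every helper indexes the tables with the same key
    echo "spdk_tgt ${app_params[$app]} -r ${app_socket[$app]}"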
00:17:25.667 07:14:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56987 ]] 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56987 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56987 00:17:25.667 07:14:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56987 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:25.924 07:14:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:25.924 SPDK target shutdown done 00:17:25.924 Success 00:17:25.924 07:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:25.924 00:17:25.925 real 0m1.543s 00:17:25.925 user 0m1.260s 00:17:25.925 sys 0m0.267s 00:17:25.925 07:14:50 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.925 07:14:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:25.925 ************************************ 00:17:25.925 END TEST json_config_extra_key 00:17:25.925 ************************************ 00:17:26.183 07:14:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:26.183 07:14:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:26.183 07:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.183 07:14:50 -- common/autotest_common.sh@10 -- # set +x 00:17:26.183 ************************************ 00:17:26.183 START TEST alias_rpc 00:17:26.183 ************************************ 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:26.183 * Looking for test storage... 
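The START TEST / END TEST banners and the real/user/sys summary come from the run_test wrapper in autotest_common.sh, invoked above as run_test alias_rpc <script>. A rough sketch of its shape, assuming it is little more than banner + time + banner; the real wrapper also manages xtrace, and the '[' 2 -le 1 ']' line in the trace is its argument-count check:

    run_test() {
        [ "$#" -le 1 ] && return 1       # needs a name plus a command
        local name=$1 rc
        shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"
        rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return $rc
    }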
00:17:26.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.183 07:14:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:26.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.183 --rc genhtml_branch_coverage=1 00:17:26.183 --rc genhtml_function_coverage=1 00:17:26.183 --rc genhtml_legend=1 00:17:26.183 --rc geninfo_all_blocks=1 00:17:26.183 --rc geninfo_unexecuted_blocks=1 00:17:26.183 00:17:26.183 ' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:26.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.183 --rc genhtml_branch_coverage=1 00:17:26.183 --rc genhtml_function_coverage=1 00:17:26.183 --rc genhtml_legend=1 00:17:26.183 --rc geninfo_all_blocks=1 00:17:26.183 --rc geninfo_unexecuted_blocks=1 00:17:26.183 00:17:26.183 ' 00:17:26.183 07:14:50 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:26.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.183 --rc genhtml_branch_coverage=1 00:17:26.183 --rc genhtml_function_coverage=1 00:17:26.183 --rc genhtml_legend=1 00:17:26.183 --rc geninfo_all_blocks=1 00:17:26.183 --rc geninfo_unexecuted_blocks=1 00:17:26.183 00:17:26.183 ' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:26.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.183 --rc genhtml_branch_coverage=1 00:17:26.183 --rc genhtml_function_coverage=1 00:17:26.183 --rc genhtml_legend=1 00:17:26.183 --rc geninfo_all_blocks=1 00:17:26.183 --rc geninfo_unexecuted_blocks=1 00:17:26.183 00:17:26.183 ' 00:17:26.183 07:14:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:26.183 07:14:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57060 00:17:26.183 07:14:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57060 00:17:26.183 07:14:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57060 ']' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.183 07:14:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.183 [2024-11-20 07:14:50.318641] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
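The scripts/common.sh walk above ("lt 1.15 2", IFS=.-:, read -ra ver1, decimal ...) is a component-wise version comparison: split both versions on '.', '-' and ':', then compare the fields numerically, padding the shorter list with zeros. A compact sketch of the same logic; non-numeric components are out of scope here, as they are for the decimal helper in the trace:

    version_lt() {
        local -a v1 v2
        local i n a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"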
00:17:26.183 [2024-11-20 07:14:50.318703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:17:26.442 [2024-11-20 07:14:50.457769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.442 [2024-11-20 07:14:50.493375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.442 [2024-11-20 07:14:50.536044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:27.007 07:14:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.007 07:14:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:27.007 07:14:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:27.265 07:14:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57060 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57060 ']' 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57060 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57060 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57060' 00:17:27.265 killing process with pid 57060 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 57060 00:17:27.265 07:14:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 57060 00:17:27.537 00:17:27.537 real 0m1.481s 00:17:27.537 user 0m1.702s 00:17:27.537 sys 0m0.277s 00:17:27.537 07:14:51 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.537 07:14:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.537 ************************************ 00:17:27.537 END TEST alias_rpc 00:17:27.537 ************************************ 00:17:27.537 07:14:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:17:27.537 07:14:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:27.537 07:14:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:27.537 07:14:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.537 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:17:27.537 ************************************ 00:17:27.537 START TEST spdkcli_tcp 00:17:27.537 ************************************ 00:17:27.537 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:27.537 * Looking for test storage... 
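The alias check itself is the single call above, scripts/rpc.py load_config -i. Reading -i as --include-aliases (an assumption, the flag is not spelled out in this log), the test loads a config whose method names may be deprecated aliases and relies on the alias table to resolve them. One plausible round-trip, labeled as such; the test's actual stdin is not visible here:

    # Assumed round-trip: pipe the target's saved config back through
    # load_config with alias resolution enabled.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config | "$rpc" load_config -i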
00:17:27.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.826 07:14:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.826 --rc genhtml_branch_coverage=1 00:17:27.826 --rc genhtml_function_coverage=1 00:17:27.826 --rc genhtml_legend=1 00:17:27.826 --rc geninfo_all_blocks=1 00:17:27.826 --rc geninfo_unexecuted_blocks=1 00:17:27.826 00:17:27.826 ' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.826 --rc genhtml_branch_coverage=1 00:17:27.826 --rc genhtml_function_coverage=1 00:17:27.826 --rc genhtml_legend=1 00:17:27.826 --rc geninfo_all_blocks=1 00:17:27.826 --rc geninfo_unexecuted_blocks=1 00:17:27.826 
00:17:27.826 ' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.826 --rc genhtml_branch_coverage=1 00:17:27.826 --rc genhtml_function_coverage=1 00:17:27.826 --rc genhtml_legend=1 00:17:27.826 --rc geninfo_all_blocks=1 00:17:27.826 --rc geninfo_unexecuted_blocks=1 00:17:27.826 00:17:27.826 ' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.826 --rc genhtml_branch_coverage=1 00:17:27.826 --rc genhtml_function_coverage=1 00:17:27.826 --rc genhtml_legend=1 00:17:27.826 --rc geninfo_all_blocks=1 00:17:27.826 --rc geninfo_unexecuted_blocks=1 00:17:27.826 00:17:27.826 ' 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57138 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57138 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57138 ']' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.826 07:14:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.826 07:14:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.826 [2024-11-20 07:14:51.848798] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
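Unlike the single-core targets earlier, spdkcli_tcp starts spdk_tgt with -m 0x3 -p 0: the hex core mask has bits 0 and 1 set, so two reactors come up (the trace shows "Reactor started on core 0" and "core 1"), and -p 0 pins the main core. The exact command from this run:

    # -m takes a hex cpumask; 0x3 = 0b11 -> cores 0 and 1, main core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0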
00:17:27.826 [2024-11-20 07:14:51.848857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57138 ] 00:17:27.826 [2024-11-20 07:14:51.989688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:28.084 [2024-11-20 07:14:52.027675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.084 [2024-11-20 07:14:52.027694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.084 [2024-11-20 07:14:52.072020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.649 07:14:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.649 07:14:52 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:17:28.649 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57155 00:17:28.649 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:28.649 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:28.907 [ 00:17:28.907 "bdev_malloc_delete", 00:17:28.907 "bdev_malloc_create", 00:17:28.907 "bdev_null_resize", 00:17:28.907 "bdev_null_delete", 00:17:28.907 "bdev_null_create", 00:17:28.907 "bdev_nvme_cuse_unregister", 00:17:28.907 "bdev_nvme_cuse_register", 00:17:28.907 "bdev_opal_new_user", 00:17:28.907 "bdev_opal_set_lock_state", 00:17:28.907 "bdev_opal_delete", 00:17:28.907 "bdev_opal_get_info", 00:17:28.907 "bdev_opal_create", 00:17:28.907 "bdev_nvme_opal_revert", 00:17:28.907 "bdev_nvme_opal_init", 00:17:28.907 "bdev_nvme_send_cmd", 00:17:28.907 "bdev_nvme_set_keys", 00:17:28.907 "bdev_nvme_get_path_iostat", 00:17:28.907 "bdev_nvme_get_mdns_discovery_info", 00:17:28.907 "bdev_nvme_stop_mdns_discovery", 00:17:28.907 "bdev_nvme_start_mdns_discovery", 00:17:28.907 "bdev_nvme_set_multipath_policy", 00:17:28.907 "bdev_nvme_set_preferred_path", 00:17:28.907 "bdev_nvme_get_io_paths", 00:17:28.907 "bdev_nvme_remove_error_injection", 00:17:28.907 "bdev_nvme_add_error_injection", 00:17:28.907 "bdev_nvme_get_discovery_info", 00:17:28.907 "bdev_nvme_stop_discovery", 00:17:28.907 "bdev_nvme_start_discovery", 00:17:28.907 "bdev_nvme_get_controller_health_info", 00:17:28.907 "bdev_nvme_disable_controller", 00:17:28.907 "bdev_nvme_enable_controller", 00:17:28.907 "bdev_nvme_reset_controller", 00:17:28.907 "bdev_nvme_get_transport_statistics", 00:17:28.907 "bdev_nvme_apply_firmware", 00:17:28.907 "bdev_nvme_detach_controller", 00:17:28.907 "bdev_nvme_get_controllers", 00:17:28.907 "bdev_nvme_attach_controller", 00:17:28.907 "bdev_nvme_set_hotplug", 00:17:28.907 "bdev_nvme_set_options", 00:17:28.907 "bdev_passthru_delete", 00:17:28.907 "bdev_passthru_create", 00:17:28.907 "bdev_lvol_set_parent_bdev", 00:17:28.907 "bdev_lvol_set_parent", 00:17:28.907 "bdev_lvol_check_shallow_copy", 00:17:28.907 "bdev_lvol_start_shallow_copy", 00:17:28.907 "bdev_lvol_grow_lvstore", 00:17:28.907 "bdev_lvol_get_lvols", 00:17:28.907 "bdev_lvol_get_lvstores", 00:17:28.907 "bdev_lvol_delete", 00:17:28.907 "bdev_lvol_set_read_only", 00:17:28.907 "bdev_lvol_resize", 00:17:28.907 "bdev_lvol_decouple_parent", 00:17:28.907 "bdev_lvol_inflate", 00:17:28.907 "bdev_lvol_rename", 00:17:28.907 "bdev_lvol_clone_bdev", 00:17:28.907 "bdev_lvol_clone", 00:17:28.907 "bdev_lvol_snapshot", 
00:17:28.907 "bdev_lvol_create", 00:17:28.907 "bdev_lvol_delete_lvstore", 00:17:28.907 "bdev_lvol_rename_lvstore", 00:17:28.907 "bdev_lvol_create_lvstore", 00:17:28.907 "bdev_raid_set_options", 00:17:28.907 "bdev_raid_remove_base_bdev", 00:17:28.907 "bdev_raid_add_base_bdev", 00:17:28.907 "bdev_raid_delete", 00:17:28.907 "bdev_raid_create", 00:17:28.907 "bdev_raid_get_bdevs", 00:17:28.907 "bdev_error_inject_error", 00:17:28.907 "bdev_error_delete", 00:17:28.907 "bdev_error_create", 00:17:28.907 "bdev_split_delete", 00:17:28.907 "bdev_split_create", 00:17:28.907 "bdev_delay_delete", 00:17:28.907 "bdev_delay_create", 00:17:28.907 "bdev_delay_update_latency", 00:17:28.907 "bdev_zone_block_delete", 00:17:28.907 "bdev_zone_block_create", 00:17:28.907 "blobfs_create", 00:17:28.907 "blobfs_detect", 00:17:28.907 "blobfs_set_cache_size", 00:17:28.907 "bdev_aio_delete", 00:17:28.907 "bdev_aio_rescan", 00:17:28.907 "bdev_aio_create", 00:17:28.907 "bdev_ftl_set_property", 00:17:28.907 "bdev_ftl_get_properties", 00:17:28.907 "bdev_ftl_get_stats", 00:17:28.907 "bdev_ftl_unmap", 00:17:28.907 "bdev_ftl_unload", 00:17:28.907 "bdev_ftl_delete", 00:17:28.907 "bdev_ftl_load", 00:17:28.907 "bdev_ftl_create", 00:17:28.907 "bdev_virtio_attach_controller", 00:17:28.907 "bdev_virtio_scsi_get_devices", 00:17:28.907 "bdev_virtio_detach_controller", 00:17:28.907 "bdev_virtio_blk_set_hotplug", 00:17:28.907 "bdev_iscsi_delete", 00:17:28.907 "bdev_iscsi_create", 00:17:28.907 "bdev_iscsi_set_options", 00:17:28.907 "bdev_uring_delete", 00:17:28.907 "bdev_uring_rescan", 00:17:28.907 "bdev_uring_create", 00:17:28.907 "accel_error_inject_error", 00:17:28.907 "ioat_scan_accel_module", 00:17:28.907 "dsa_scan_accel_module", 00:17:28.907 "iaa_scan_accel_module", 00:17:28.907 "keyring_file_remove_key", 00:17:28.907 "keyring_file_add_key", 00:17:28.907 "keyring_linux_set_options", 00:17:28.907 "fsdev_aio_delete", 00:17:28.907 "fsdev_aio_create", 00:17:28.907 "iscsi_get_histogram", 00:17:28.907 "iscsi_enable_histogram", 00:17:28.907 "iscsi_set_options", 00:17:28.907 "iscsi_get_auth_groups", 00:17:28.907 "iscsi_auth_group_remove_secret", 00:17:28.907 "iscsi_auth_group_add_secret", 00:17:28.907 "iscsi_delete_auth_group", 00:17:28.907 "iscsi_create_auth_group", 00:17:28.907 "iscsi_set_discovery_auth", 00:17:28.907 "iscsi_get_options", 00:17:28.907 "iscsi_target_node_request_logout", 00:17:28.907 "iscsi_target_node_set_redirect", 00:17:28.907 "iscsi_target_node_set_auth", 00:17:28.907 "iscsi_target_node_add_lun", 00:17:28.907 "iscsi_get_stats", 00:17:28.907 "iscsi_get_connections", 00:17:28.907 "iscsi_portal_group_set_auth", 00:17:28.907 "iscsi_start_portal_group", 00:17:28.907 "iscsi_delete_portal_group", 00:17:28.907 "iscsi_create_portal_group", 00:17:28.907 "iscsi_get_portal_groups", 00:17:28.907 "iscsi_delete_target_node", 00:17:28.907 "iscsi_target_node_remove_pg_ig_maps", 00:17:28.907 "iscsi_target_node_add_pg_ig_maps", 00:17:28.907 "iscsi_create_target_node", 00:17:28.907 "iscsi_get_target_nodes", 00:17:28.907 "iscsi_delete_initiator_group", 00:17:28.907 "iscsi_initiator_group_remove_initiators", 00:17:28.907 "iscsi_initiator_group_add_initiators", 00:17:28.907 "iscsi_create_initiator_group", 00:17:28.907 "iscsi_get_initiator_groups", 00:17:28.907 "nvmf_set_crdt", 00:17:28.907 "nvmf_set_config", 00:17:28.907 "nvmf_set_max_subsystems", 00:17:28.907 "nvmf_stop_mdns_prr", 00:17:28.907 "nvmf_publish_mdns_prr", 00:17:28.907 "nvmf_subsystem_get_listeners", 00:17:28.907 "nvmf_subsystem_get_qpairs", 00:17:28.907 
"nvmf_subsystem_get_controllers", 00:17:28.907 "nvmf_get_stats", 00:17:28.907 "nvmf_get_transports", 00:17:28.907 "nvmf_create_transport", 00:17:28.908 "nvmf_get_targets", 00:17:28.908 "nvmf_delete_target", 00:17:28.908 "nvmf_create_target", 00:17:28.908 "nvmf_subsystem_allow_any_host", 00:17:28.908 "nvmf_subsystem_set_keys", 00:17:28.908 "nvmf_subsystem_remove_host", 00:17:28.908 "nvmf_subsystem_add_host", 00:17:28.908 "nvmf_ns_remove_host", 00:17:28.908 "nvmf_ns_add_host", 00:17:28.908 "nvmf_subsystem_remove_ns", 00:17:28.908 "nvmf_subsystem_set_ns_ana_group", 00:17:28.908 "nvmf_subsystem_add_ns", 00:17:28.908 "nvmf_subsystem_listener_set_ana_state", 00:17:28.908 "nvmf_discovery_get_referrals", 00:17:28.908 "nvmf_discovery_remove_referral", 00:17:28.908 "nvmf_discovery_add_referral", 00:17:28.908 "nvmf_subsystem_remove_listener", 00:17:28.908 "nvmf_subsystem_add_listener", 00:17:28.908 "nvmf_delete_subsystem", 00:17:28.908 "nvmf_create_subsystem", 00:17:28.908 "nvmf_get_subsystems", 00:17:28.908 "env_dpdk_get_mem_stats", 00:17:28.908 "nbd_get_disks", 00:17:28.908 "nbd_stop_disk", 00:17:28.908 "nbd_start_disk", 00:17:28.908 "ublk_recover_disk", 00:17:28.908 "ublk_get_disks", 00:17:28.908 "ublk_stop_disk", 00:17:28.908 "ublk_start_disk", 00:17:28.908 "ublk_destroy_target", 00:17:28.908 "ublk_create_target", 00:17:28.908 "virtio_blk_create_transport", 00:17:28.908 "virtio_blk_get_transports", 00:17:28.908 "vhost_controller_set_coalescing", 00:17:28.908 "vhost_get_controllers", 00:17:28.908 "vhost_delete_controller", 00:17:28.908 "vhost_create_blk_controller", 00:17:28.908 "vhost_scsi_controller_remove_target", 00:17:28.908 "vhost_scsi_controller_add_target", 00:17:28.908 "vhost_start_scsi_controller", 00:17:28.908 "vhost_create_scsi_controller", 00:17:28.908 "thread_set_cpumask", 00:17:28.908 "scheduler_set_options", 00:17:28.908 "framework_get_governor", 00:17:28.908 "framework_get_scheduler", 00:17:28.908 "framework_set_scheduler", 00:17:28.908 "framework_get_reactors", 00:17:28.908 "thread_get_io_channels", 00:17:28.908 "thread_get_pollers", 00:17:28.908 "thread_get_stats", 00:17:28.908 "framework_monitor_context_switch", 00:17:28.908 "spdk_kill_instance", 00:17:28.908 "log_enable_timestamps", 00:17:28.908 "log_get_flags", 00:17:28.908 "log_clear_flag", 00:17:28.908 "log_set_flag", 00:17:28.908 "log_get_level", 00:17:28.908 "log_set_level", 00:17:28.908 "log_get_print_level", 00:17:28.908 "log_set_print_level", 00:17:28.908 "framework_enable_cpumask_locks", 00:17:28.908 "framework_disable_cpumask_locks", 00:17:28.908 "framework_wait_init", 00:17:28.908 "framework_start_init", 00:17:28.908 "scsi_get_devices", 00:17:28.908 "bdev_get_histogram", 00:17:28.908 "bdev_enable_histogram", 00:17:28.908 "bdev_set_qos_limit", 00:17:28.908 "bdev_set_qd_sampling_period", 00:17:28.908 "bdev_get_bdevs", 00:17:28.908 "bdev_reset_iostat", 00:17:28.908 "bdev_get_iostat", 00:17:28.908 "bdev_examine", 00:17:28.908 "bdev_wait_for_examine", 00:17:28.908 "bdev_set_options", 00:17:28.908 "accel_get_stats", 00:17:28.908 "accel_set_options", 00:17:28.908 "accel_set_driver", 00:17:28.908 "accel_crypto_key_destroy", 00:17:28.908 "accel_crypto_keys_get", 00:17:28.908 "accel_crypto_key_create", 00:17:28.908 "accel_assign_opc", 00:17:28.908 "accel_get_module_info", 00:17:28.908 "accel_get_opc_assignments", 00:17:28.908 "vmd_rescan", 00:17:28.908 "vmd_remove_device", 00:17:28.908 "vmd_enable", 00:17:28.908 "sock_get_default_impl", 00:17:28.908 "sock_set_default_impl", 00:17:28.908 "sock_impl_set_options", 00:17:28.908 
"sock_impl_get_options", 00:17:28.908 "iobuf_get_stats", 00:17:28.908 "iobuf_set_options", 00:17:28.908 "keyring_get_keys", 00:17:28.908 "framework_get_pci_devices", 00:17:28.908 "framework_get_config", 00:17:28.908 "framework_get_subsystems", 00:17:28.908 "fsdev_set_opts", 00:17:28.908 "fsdev_get_opts", 00:17:28.908 "trace_get_info", 00:17:28.908 "trace_get_tpoint_group_mask", 00:17:28.908 "trace_disable_tpoint_group", 00:17:28.908 "trace_enable_tpoint_group", 00:17:28.908 "trace_clear_tpoint_mask", 00:17:28.908 "trace_set_tpoint_mask", 00:17:28.908 "notify_get_notifications", 00:17:28.908 "notify_get_types", 00:17:28.908 "spdk_get_version", 00:17:28.908 "rpc_get_methods" 00:17:28.908 ] 00:17:28.908 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.908 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:28.908 07:14:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57138 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57138 ']' 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57138 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57138 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.908 killing process with pid 57138 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57138' 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57138 00:17:28.908 07:14:52 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57138 00:17:29.167 00:17:29.167 real 0m1.469s 00:17:29.167 user 0m2.688s 00:17:29.167 sys 0m0.333s 00:17:29.167 07:14:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.167 ************************************ 00:17:29.167 END TEST spdkcli_tcp 00:17:29.167 ************************************ 00:17:29.167 07:14:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.167 07:14:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:29.167 07:14:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.167 07:14:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.167 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:17:29.167 ************************************ 00:17:29.167 START TEST dpdk_mem_utility 00:17:29.167 ************************************ 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:29.167 * Looking for test storage... 
00:17:29.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.167 07:14:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.167 --rc genhtml_branch_coverage=1 00:17:29.167 --rc genhtml_function_coverage=1 00:17:29.167 --rc genhtml_legend=1 00:17:29.167 --rc geninfo_all_blocks=1 00:17:29.167 --rc geninfo_unexecuted_blocks=1 00:17:29.167 00:17:29.167 ' 00:17:29.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.167 --rc genhtml_branch_coverage=1 00:17:29.167 --rc genhtml_function_coverage=1 00:17:29.167 --rc genhtml_legend=1 00:17:29.167 --rc geninfo_all_blocks=1 00:17:29.167 --rc geninfo_unexecuted_blocks=1 00:17:29.167 00:17:29.167 ' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.167 --rc genhtml_branch_coverage=1 00:17:29.167 --rc genhtml_function_coverage=1 00:17:29.167 --rc genhtml_legend=1 00:17:29.167 --rc geninfo_all_blocks=1 00:17:29.167 --rc geninfo_unexecuted_blocks=1 00:17:29.167 00:17:29.167 ' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.167 --rc genhtml_branch_coverage=1 00:17:29.167 --rc genhtml_function_coverage=1 00:17:29.167 --rc genhtml_legend=1 00:17:29.167 --rc geninfo_all_blocks=1 00:17:29.167 --rc geninfo_unexecuted_blocks=1 00:17:29.167 00:17:29.167 ' 00:17:29.167 07:14:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:29.167 07:14:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57232 00:17:29.167 07:14:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57232 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57232 ']' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.167 07:14:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:29.167 07:14:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.167 [2024-11-20 07:14:53.339873] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
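The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from a readiness loop run after launching spdk_tgt. A minimal sketch of that kind of loop, under the assumption that it polls for the RPC socket with a retry cap; the real helper additionally confirms readiness over RPC:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$rpc_addr" ] && return 0           # socket exists: target is listening
            sleep 0.1
        done
        return 1
    }

Only once this returns does the test start issuing rpc_cmd calls such as env_dpdk_get_mem_stats below.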
00:17:29.167 [2024-11-20 07:14:53.339933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57232 ] 00:17:29.425 [2024-11-20 07:14:53.478116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.425 [2024-11-20 07:14:53.513568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.425 [2024-11-20 07:14:53.557044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.361 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.361 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:17:30.361 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:30.361 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:30.361 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.361 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:30.361 { 00:17:30.361 "filename": "/tmp/spdk_mem_dump.txt" 00:17:30.361 } 00:17:30.361 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.361 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:30.361 DPDK memory size 810.000000 MiB in 1 heap(s) 00:17:30.361 1 heaps totaling size 810.000000 MiB 00:17:30.361 size: 810.000000 MiB heap id: 0 00:17:30.361 end heaps---------- 00:17:30.361 9 mempools totaling size 595.772034 MiB 00:17:30.361 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:30.361 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:30.361 size: 92.545471 MiB name: bdev_io_57232 00:17:30.361 size: 50.003479 MiB name: msgpool_57232 00:17:30.361 size: 36.509338 MiB name: fsdev_io_57232 00:17:30.361 size: 21.763794 MiB name: PDU_Pool 00:17:30.361 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:30.361 size: 4.133484 MiB name: evtpool_57232 00:17:30.361 size: 0.026123 MiB name: Session_Pool 00:17:30.361 end mempools------- 00:17:30.361 6 memzones totaling size 4.142822 MiB 00:17:30.361 size: 1.000366 MiB name: RG_ring_0_57232 00:17:30.361 size: 1.000366 MiB name: RG_ring_1_57232 00:17:30.361 size: 1.000366 MiB name: RG_ring_4_57232 00:17:30.361 size: 1.000366 MiB name: RG_ring_5_57232 00:17:30.361 size: 0.125366 MiB name: RG_ring_2_57232 00:17:30.361 size: 0.015991 MiB name: RG_ring_3_57232 00:17:30.361 end memzones------- 00:17:30.361 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:17:30.361 heap id: 0 total size: 810.000000 MiB number of busy elements: 326 number of free elements: 15 00:17:30.361 list of free elements. 
size: 10.810852 MiB 00:17:30.361 element at address: 0x200018a00000 with size: 0.999878 MiB 00:17:30.361 element at address: 0x200018c00000 with size: 0.999878 MiB 00:17:30.361 element at address: 0x200031800000 with size: 0.994446 MiB 00:17:30.361 element at address: 0x200000400000 with size: 0.993958 MiB 00:17:30.361 element at address: 0x200006400000 with size: 0.959839 MiB 00:17:30.361 element at address: 0x200012c00000 with size: 0.954285 MiB 00:17:30.361 element at address: 0x200018e00000 with size: 0.936584 MiB 00:17:30.361 element at address: 0x200000200000 with size: 0.717346 MiB 00:17:30.361 element at address: 0x20001a600000 with size: 0.565308 MiB 00:17:30.361 element at address: 0x20000a600000 with size: 0.488892 MiB 00:17:30.361 element at address: 0x200000c00000 with size: 0.487000 MiB 00:17:30.361 element at address: 0x200019000000 with size: 0.485657 MiB 00:17:30.361 element at address: 0x200003e00000 with size: 0.480286 MiB 00:17:30.362 element at address: 0x200027a00000 with size: 0.395752 MiB 00:17:30.362 element at address: 0x200000800000 with size: 0.351746 MiB 00:17:30.362 list of standard malloc elements. size: 199.270264 MiB 00:17:30.362 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:17:30.362 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:17:30.362 element at address: 0x200018afff80 with size: 1.000122 MiB 00:17:30.362 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:17:30.362 element at address: 0x200018efff80 with size: 1.000122 MiB 00:17:30.362 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:17:30.362 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:17:30.362 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:17:30.362 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:17:30.362 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:17:30.362 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000085e580 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087e840 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087e900 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f080 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f140 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f200 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f380 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f440 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f500 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000087f680 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:17:30.362 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000cff000 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:17:30.362 element at address: 0x200003efb980 with size: 0.000183 MiB 00:17:30.362 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:17:30.362 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690b80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690c40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690d00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690dc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690e80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a690f40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691000 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691180 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691240 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691300 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691480 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691540 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691600 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691780 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691840 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691900 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692080 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692140 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692200 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692380 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692440 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692500 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6925c0 with size: 0.000183 MiB 
00:17:30.363 element at address: 0x20001a692680 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692740 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692800 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692980 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693040 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693100 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693280 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693340 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693400 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693580 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693640 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693700 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693880 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693940 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694000 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694180 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694240 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694300 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694480 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694540 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694600 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694780 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694840 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694900 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:17:30.363 element at 
address: 0x20001a694b40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a695080 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a695140 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a695200 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a695380 with size: 0.000183 MiB 00:17:30.363 element at address: 0x20001a695440 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a65500 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:17:30.363 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6dc80 
with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:17:30.364 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:17:30.364 list of memzone associated elements. 
size: 599.918884 MiB 00:17:30.364 element at address: 0x20001a695500 with size: 211.416748 MiB 00:17:30.364 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:17:30.364 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:17:30.364 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:17:30.364 element at address: 0x200012df4780 with size: 92.045044 MiB 00:17:30.364 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57232_0 00:17:30.364 element at address: 0x200000dff380 with size: 48.003052 MiB 00:17:30.364 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57232_0 00:17:30.364 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:17:30.364 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57232_0 00:17:30.364 element at address: 0x2000191be940 with size: 20.255554 MiB 00:17:30.364 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:17:30.364 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:17:30.364 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:17:30.364 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:17:30.364 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57232_0 00:17:30.364 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:17:30.364 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57232 00:17:30.364 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:17:30.364 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57232 00:17:30.364 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:17:30.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:17:30.364 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:17:30.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:17:30.364 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:17:30.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:17:30.364 element at address: 0x200003efba40 with size: 1.008118 MiB 00:17:30.364 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:17:30.364 element at address: 0x200000cff180 with size: 1.000488 MiB 00:17:30.364 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57232 00:17:30.364 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:17:30.364 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57232 00:17:30.364 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:17:30.364 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57232 00:17:30.364 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:17:30.364 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57232 00:17:30.364 element at address: 0x20000087f740 with size: 0.500488 MiB 00:17:30.364 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57232 00:17:30.364 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:17:30.364 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57232 00:17:30.364 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:17:30.364 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:17:30.364 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:17:30.364 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:17:30.364 element at address: 0x20001907c540 with size: 0.250488 MiB 00:17:30.364 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:17:30.364 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:17:30.364 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57232 00:17:30.364 element at address: 0x20000085e640 with size: 0.125488 MiB 00:17:30.364 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57232 00:17:30.364 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:17:30.364 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:17:30.364 element at address: 0x200027a65680 with size: 0.023743 MiB 00:17:30.364 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:17:30.364 element at address: 0x20000085a380 with size: 0.016113 MiB 00:17:30.364 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57232 00:17:30.364 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:17:30.364 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:17:30.364 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:17:30.364 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57232 00:17:30.364 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:17:30.364 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57232 00:17:30.364 element at address: 0x20000085a180 with size: 0.000305 MiB 00:17:30.364 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57232 00:17:30.364 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:17:30.364 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:17:30.364 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:17:30.364 07:14:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57232 00:17:30.364 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57232 ']' 00:17:30.364 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57232 00:17:30.364 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57232 00:17:30.365 killing process with pid 57232 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57232' 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57232 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57232 00:17:30.365 00:17:30.365 real 0m1.372s 00:17:30.365 user 0m1.524s 00:17:30.365 sys 0m0.273s 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.365 07:14:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:30.365 ************************************ 00:17:30.365 END TEST dpdk_mem_utility 00:17:30.365 ************************************ 00:17:30.624 07:14:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:30.624 07:14:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:30.624 07:14:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.624 07:14:54 -- common/autotest_common.sh@10 -- # set +x 
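The dpdk_mem_utility test above exercises a two-step introspection flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump as the heap/mempool/memzone totals and the per-element listing seen in the log. A sketch of the same flow run by hand, assuming the repo path used throughout this job:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Ask the live spdk_tgt (on the default /var/tmp/spdk.sock) to dump
    # its DPDK memory state; the RPC reports the dump file it wrote.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # Summarize the dump: heaps, mempools, memzones.
    "$SPDK/scripts/dpdk_mem_info.py"
    # Per-element detail for heap 0, matching the long listing above.
    "$SPDK/scripts/dpdk_mem_info.py" -m 0

The bulk of the output is the heap-0 element map: hundreds of small (0.000183 MiB) allocator elements plus the named memzones and per-PID rings/mempools (msgpool_57232, bdev_io_57232, fsdev_io_57232, and so on) created by the target at startup.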
00:17:30.624 ************************************ 00:17:30.624 START TEST event 00:17:30.624 ************************************ 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:30.624 * Looking for test storage... 00:17:30.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.624 07:14:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.624 07:14:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.624 07:14:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.624 07:14:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.624 07:14:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.624 07:14:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.624 07:14:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.624 07:14:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.624 07:14:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.624 07:14:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.624 07:14:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.624 07:14:54 event -- scripts/common.sh@344 -- # case "$op" in 00:17:30.624 07:14:54 event -- scripts/common.sh@345 -- # : 1 00:17:30.624 07:14:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.624 07:14:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:30.624 07:14:54 event -- scripts/common.sh@365 -- # decimal 1 00:17:30.624 07:14:54 event -- scripts/common.sh@353 -- # local d=1 00:17:30.624 07:14:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.624 07:14:54 event -- scripts/common.sh@355 -- # echo 1 00:17:30.624 07:14:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.624 07:14:54 event -- scripts/common.sh@366 -- # decimal 2 00:17:30.624 07:14:54 event -- scripts/common.sh@353 -- # local d=2 00:17:30.624 07:14:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.624 07:14:54 event -- scripts/common.sh@355 -- # echo 2 00:17:30.624 07:14:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.624 07:14:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.624 07:14:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.624 07:14:54 event -- scripts/common.sh@368 -- # return 0 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.624 --rc genhtml_branch_coverage=1 00:17:30.624 --rc genhtml_function_coverage=1 00:17:30.624 --rc genhtml_legend=1 00:17:30.624 --rc geninfo_all_blocks=1 00:17:30.624 --rc geninfo_unexecuted_blocks=1 00:17:30.624 00:17:30.624 ' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.624 --rc genhtml_branch_coverage=1 00:17:30.624 --rc genhtml_function_coverage=1 00:17:30.624 --rc genhtml_legend=1 00:17:30.624 --rc 
geninfo_all_blocks=1 00:17:30.624 --rc geninfo_unexecuted_blocks=1 00:17:30.624 00:17:30.624 ' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.624 --rc genhtml_branch_coverage=1 00:17:30.624 --rc genhtml_function_coverage=1 00:17:30.624 --rc genhtml_legend=1 00:17:30.624 --rc geninfo_all_blocks=1 00:17:30.624 --rc geninfo_unexecuted_blocks=1 00:17:30.624 00:17:30.624 ' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.624 --rc genhtml_branch_coverage=1 00:17:30.624 --rc genhtml_function_coverage=1 00:17:30.624 --rc genhtml_legend=1 00:17:30.624 --rc geninfo_all_blocks=1 00:17:30.624 --rc geninfo_unexecuted_blocks=1 00:17:30.624 00:17:30.624 ' 00:17:30.624 07:14:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:30.624 07:14:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:17:30.624 07:14:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:17:30.624 07:14:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.624 07:14:54 event -- common/autotest_common.sh@10 -- # set +x 00:17:30.624 ************************************ 00:17:30.624 START TEST event_perf 00:17:30.624 ************************************ 00:17:30.624 07:14:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:30.624 Running I/O for 1 seconds...[2024-11-20 07:14:54.730311] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:30.624 [2024-11-20 07:14:54.730371] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57311 ] 00:17:30.883 [2024-11-20 07:14:54.874205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.883 [2024-11-20 07:14:54.913914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.883 [2024-11-20 07:14:54.913986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.883 [2024-11-20 07:14:54.914233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.883 [2024-11-20 07:14:54.914250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.816 Running I/O for 1 seconds... 00:17:31.816 lcore 0: 170591 00:17:31.816 lcore 1: 170593 00:17:31.816 lcore 2: 170595 00:17:31.816 lcore 3: 170594 00:17:31.816 done. 
00:17:31.816 00:17:31.816 real 0m1.229s 00:17:31.816 user 0m4.076s 00:17:31.816 sys 0m0.031s 00:17:31.816 07:14:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.816 07:14:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:17:31.816 ************************************ 00:17:31.816 END TEST event_perf 00:17:31.816 ************************************ 00:17:31.816 07:14:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:31.816 07:14:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.816 07:14:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.816 07:14:55 event -- common/autotest_common.sh@10 -- # set +x 00:17:31.816 ************************************ 00:17:31.816 START TEST event_reactor 00:17:31.816 ************************************ 00:17:31.816 07:14:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:31.816 [2024-11-20 07:14:56.000481] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:31.816 [2024-11-20 07:14:56.000543] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57344 ] 00:17:32.073 [2024-11-20 07:14:56.138371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.073 [2024-11-20 07:14:56.174632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.006 test_start 00:17:33.007 oneshot 00:17:33.007 tick 100 00:17:33.007 tick 100 00:17:33.007 tick 250 00:17:33.007 tick 100 00:17:33.007 tick 100 00:17:33.007 tick 250 00:17:33.007 tick 100 00:17:33.007 tick 500 00:17:33.007 tick 100 00:17:33.007 tick 100 00:17:33.007 tick 250 00:17:33.007 tick 100 00:17:33.007 tick 100 00:17:33.007 test_end 00:17:33.007 00:17:33.007 real 0m1.221s 00:17:33.007 user 0m1.086s 00:17:33.007 sys 0m0.029s 00:17:33.007 07:14:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.007 07:14:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:17:33.007 ************************************ 00:17:33.007 END TEST event_reactor 00:17:33.007 ************************************ 00:17:33.266 07:14:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:33.266 07:14:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:33.266 07:14:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.266 07:14:57 event -- common/autotest_common.sh@10 -- # set +x 00:17:33.266 ************************************ 00:17:33.266 START TEST event_reactor_perf 00:17:33.266 ************************************ 00:17:33.266 07:14:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:33.266 [2024-11-20 07:14:57.257526] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:17:33.266 [2024-11-20 07:14:57.257591] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57380 ] 00:17:33.266 [2024-11-20 07:14:57.394817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.266 [2024-11-20 07:14:57.430305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.642 test_start 00:17:34.642 test_end 00:17:34.642 Performance: 386493 events per second 00:17:34.642 00:17:34.642 real 0m1.215s 00:17:34.642 user 0m1.084s 00:17:34.642 sys 0m0.025s 00:17:34.642 07:14:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.642 07:14:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:17:34.642 ************************************ 00:17:34.642 END TEST event_reactor_perf 00:17:34.642 ************************************ 00:17:34.642 07:14:58 event -- event/event.sh@49 -- # uname -s 00:17:34.642 07:14:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:34.642 07:14:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:34.642 07:14:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.642 07:14:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.642 07:14:58 event -- common/autotest_common.sh@10 -- # set +x 00:17:34.642 ************************************ 00:17:34.642 START TEST event_scheduler 00:17:34.642 ************************************ 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:34.642 * Looking for test storage... 
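Each test in this log is framed by the same starred START TEST / END TEST banners with a real/user/sys timing block in between. A rough sketch of the wrapper that produces them, as a simplified reconstruction; the real run_test also manages xtrace state and argument validation:

    run_test_sketch() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # bash's time keyword emits the real/user/sys lines seen in the log.
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }

    # Invocation mirroring the event_scheduler test that starts here:
    # run_test_sketch event_scheduler "$SPDK/test/event/scheduler/scheduler.sh"

Because the wrapper propagates the wrapped command's exit code, a failing inner test fails the enclosing run_test call and, ultimately, the pipeline stage.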
00:17:34.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.642 07:14:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.642 --rc genhtml_branch_coverage=1 00:17:34.642 --rc genhtml_function_coverage=1 00:17:34.642 --rc genhtml_legend=1 00:17:34.642 --rc geninfo_all_blocks=1 00:17:34.642 --rc geninfo_unexecuted_blocks=1 00:17:34.642 00:17:34.642 ' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.642 --rc genhtml_branch_coverage=1 00:17:34.642 --rc genhtml_function_coverage=1 00:17:34.642 --rc genhtml_legend=1 00:17:34.642 --rc geninfo_all_blocks=1 00:17:34.642 --rc geninfo_unexecuted_blocks=1 00:17:34.642 00:17:34.642 ' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.642 --rc genhtml_branch_coverage=1 00:17:34.642 --rc genhtml_function_coverage=1 00:17:34.642 --rc genhtml_legend=1 00:17:34.642 --rc geninfo_all_blocks=1 00:17:34.642 --rc geninfo_unexecuted_blocks=1 00:17:34.642 00:17:34.642 ' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.642 --rc genhtml_branch_coverage=1 00:17:34.642 --rc genhtml_function_coverage=1 00:17:34.642 --rc genhtml_legend=1 00:17:34.642 --rc geninfo_all_blocks=1 00:17:34.642 --rc geninfo_unexecuted_blocks=1 00:17:34.642 00:17:34.642 ' 00:17:34.642 07:14:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:34.642 07:14:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57449 00:17:34.642 07:14:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:34.642 07:14:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57449 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57449 ']' 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.642 07:14:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:34.642 07:14:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.643 07:14:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.643 07:14:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:34.643 [2024-11-20 07:14:58.671682] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:34.643 [2024-11-20 07:14:58.671767] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57449 ] 00:17:34.643 [2024-11-20 07:14:58.815572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.900 [2024-11-20 07:14:58.867435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.900 [2024-11-20 07:14:58.867676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.900 [2024-11-20 07:14:58.867876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.900 [2024-11-20 07:14:58.867880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:17:35.476 07:14:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:35.476 POWER: Cannot set governor of lcore 0 to userspace 00:17:35.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:35.476 POWER: Cannot set governor of lcore 0 to performance 00:17:35.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:35.476 POWER: Cannot set governor of lcore 0 to userspace 00:17:35.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:35.476 POWER: Cannot set governor of lcore 0 to userspace 00:17:35.476 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:17:35.476 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:35.476 POWER: Unable to set Power Management Environment for lcore 0 00:17:35.476 [2024-11-20 07:14:59.573299] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:17:35.476 [2024-11-20 07:14:59.573313] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:17:35.476 [2024-11-20 07:14:59.573320] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:17:35.476 [2024-11-20 07:14:59.573332] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:17:35.476 [2024-11-20 07:14:59.573339] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:17:35.476 [2024-11-20 07:14:59.573345] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.476 07:14:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 [2024-11-20 07:14:59.609832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.476 [2024-11-20 07:14:59.631611] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.476 07:14:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 ************************************ 00:17:35.476 START TEST scheduler_create_thread 00:17:35.476 ************************************ 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 2 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 3 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 4 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.476 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 5 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 6 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 7 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 8 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 9 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 10 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.734 07:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:36.300 07:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.300 00:17:36.300 real 0m0.589s 00:17:36.300 user 0m0.011s 00:17:36.300 sys 0m0.005s 00:17:36.300 07:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.300 ************************************ 00:17:36.300 END TEST scheduler_create_thread 00:17:36.300 ************************************ 00:17:36.300 07:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:36.300 07:15:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:36.300 07:15:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57449 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57449 ']' 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57449 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57449 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:36.300 killing process with pid 57449 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
57449' 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57449 00:17:36.300 07:15:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57449 00:17:36.558 [2024-11-20 07:15:00.709055] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:17:36.816 00:17:36.816 real 0m2.307s 00:17:36.816 user 0m4.722s 00:17:36.816 sys 0m0.250s 00:17:36.816 07:15:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.816 ************************************ 00:17:36.816 END TEST event_scheduler 00:17:36.816 ************************************ 00:17:36.816 07:15:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:36.816 07:15:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:17:36.816 07:15:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:17:36.816 07:15:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:36.816 07:15:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.816 07:15:00 event -- common/autotest_common.sh@10 -- # set +x 00:17:36.816 ************************************ 00:17:36.816 START TEST app_repeat 00:17:36.816 ************************************ 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57521 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:17:36.816 Process app_repeat pid: 57521 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57521' 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:36.816 spdk_app_start Round 0 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57521 /var/tmp/spdk-nbd.sock 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57521 ']' 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
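The kill-and-wait teardown a few entries above is the test suite's standard way of stopping a target: confirm the PID is alive, resolve its process name, send SIGTERM, then wait for it to exit. A minimal sketch of that pattern, assuming a plain (non-sudo) child process of the current shell; the real helper in common/autotest_common.sh covers more cases:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_2, as logged above
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                   # SIGTERM first, as in the log
        wait "$pid" 2>/dev/null || true               # reap it if it is our child
    }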
00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.816 07:15:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:36.816 07:15:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:17:36.816 [2024-11-20 07:15:00.878680] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:36.816 [2024-11-20 07:15:00.879008] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57521 ] 00:17:36.816 [2024-11-20 07:15:01.015437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:37.073 [2024-11-20 07:15:01.052757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.073 [2024-11-20 07:15:01.052768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.073 [2024-11-20 07:15:01.082906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.640 07:15:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.640 07:15:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:37.640 07:15:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:37.898 Malloc0 00:17:37.898 07:15:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:38.156 Malloc1 00:17:38.156 07:15:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.156 07:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:38.415 /dev/nbd0 00:17:38.415 07:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
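Each app_repeat round drives the target purely over JSON-RPC: two malloc bdevs (64 MB, 4 KiB block size) are created and exported as kernel nbd devices. These are the calls as issued in the trace; they assume a running SPDK app listening on the given socket and the nbd kernel module loaded (the test modprobes it beforehand):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s "$sock" bdev_malloc_create 64 4096        # prints the bdev name, e.g. Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1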
00:17:38.415 07:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:38.415 1+0 records in 00:17:38.415 1+0 records out 00:17:38.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241088 s, 17.0 MB/s 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.415 07:15:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:38.415 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.415 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.415 07:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:38.673 /dev/nbd1 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:38.673 1+0 records in 00:17:38.673 1+0 records out 00:17:38.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227379 s, 18.0 MB/s 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.673 07:15:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.673 07:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:38.933 { 00:17:38.933 "nbd_device": "/dev/nbd0", 00:17:38.933 "bdev_name": "Malloc0" 00:17:38.933 }, 00:17:38.933 { 00:17:38.933 "nbd_device": "/dev/nbd1", 00:17:38.933 "bdev_name": "Malloc1" 00:17:38.933 } 00:17:38.933 ]' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:38.933 { 00:17:38.933 "nbd_device": "/dev/nbd0", 00:17:38.933 "bdev_name": "Malloc0" 00:17:38.933 }, 00:17:38.933 { 00:17:38.933 "nbd_device": "/dev/nbd1", 00:17:38.933 "bdev_name": "Malloc1" 00:17:38.933 } 00:17:38.933 ]' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:38.933 /dev/nbd1' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:38.933 /dev/nbd1' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:38.933 256+0 records in 00:17:38.933 256+0 records out 00:17:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00922339 s, 114 MB/s 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:38.933 256+0 records in 00:17:38.933 256+0 records out 00:17:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017245 s, 60.8 MB/s 00:17:38.933 07:15:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:38.933 07:15:02 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:38.933 256+0 records in 00:17:38.933 256+0 records out 00:17:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190505 s, 55.0 MB/s 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.933 07:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.192 07:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:39.450 07:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.451 07:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:39.709 07:15:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:39.709 07:15:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:39.967 07:15:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:39.967 [2024-11-20 07:15:03.988874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:39.967 [2024-11-20 07:15:04.023329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.967 [2024-11-20 07:15:04.023335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.967 [2024-11-20 07:15:04.053800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.967 [2024-11-20 07:15:04.053859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:39.967 [2024-11-20 07:15:04.053868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:43.247 07:15:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:43.247 spdk_app_start Round 1 00:17:43.247 07:15:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:43.247 07:15:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57521 /var/tmp/spdk-nbd.sock 00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57521 ']' 00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
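The data-integrity pass in the round above is plain dd plus cmp: 256 blocks of 4096 bytes (exactly the 1 MiB that cmp -n 1M later checks) go from /dev/urandom into a scratch file, then onto each nbd device with O_DIRECT, and finally each device is compared byte-for-byte against the file. As run in the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 256 * 4096 B = 1 MiB
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                    # any differing byte would be printed
    done
    rm "$tmp"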
00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.247 07:15:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:43.247 07:15:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.247 07:15:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:43.247 07:15:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:43.247 Malloc0 00:17:43.247 07:15:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:43.505 Malloc1 00:17:43.505 07:15:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.505 07:15:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:43.762 /dev/nbd0 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.762 07:15:07 event.app_repeat -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:43.762 1+0 records in 00:17:43.762 1+0 records out 00:17:43.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148828 s, 27.5 MB/s 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:43.762 /dev/nbd1 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.762 07:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.762 07:15:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:44.020 1+0 records in 00:17:44.020 1+0 records out 00:17:44.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222967 s, 18.4 MB/s 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.020 07:15:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:44.020 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.020 07:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.020 07:15:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:44.020 07:15:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.020 07:15:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:44.020 { 00:17:44.020 
"nbd_device": "/dev/nbd0", 00:17:44.020 "bdev_name": "Malloc0" 00:17:44.020 }, 00:17:44.020 { 00:17:44.020 "nbd_device": "/dev/nbd1", 00:17:44.020 "bdev_name": "Malloc1" 00:17:44.020 } 00:17:44.020 ]' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:44.020 { 00:17:44.020 "nbd_device": "/dev/nbd0", 00:17:44.020 "bdev_name": "Malloc0" 00:17:44.020 }, 00:17:44.020 { 00:17:44.020 "nbd_device": "/dev/nbd1", 00:17:44.020 "bdev_name": "Malloc1" 00:17:44.020 } 00:17:44.020 ]' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:44.020 /dev/nbd1' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:44.020 /dev/nbd1' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:44.020 07:15:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:44.021 07:15:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.021 07:15:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:44.021 07:15:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:44.277 256+0 records in 00:17:44.277 256+0 records out 00:17:44.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00910667 s, 115 MB/s 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:44.277 256+0 records in 00:17:44.277 256+0 records out 00:17:44.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184388 s, 56.9 MB/s 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:44.277 256+0 records in 00:17:44.277 256+0 records out 00:17:44.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193349 s, 54.2 MB/s 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.277 07:15:08 event.app_repeat -- 
bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.277 07:15:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.535 07:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:44.793 07:15:08 event.app_repeat -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:44.793 07:15:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:44.793 07:15:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:45.050 07:15:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:45.050 [2024-11-20 07:15:09.249816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:45.307 [2024-11-20 07:15:09.284219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.307 [2024-11-20 07:15:09.284239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.307 [2024-11-20 07:15:09.315214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:45.307 [2024-11-20 07:15:09.315283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:45.307 [2024-11-20 07:15:09.315291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:48.664 spdk_app_start Round 2 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57521 /var/tmp/spdk-nbd.sock 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57521 ']' 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
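Teardown is then verified by asking the target which nbd disks remain: nbd_get_disks returns a JSON array, jq extracts the .nbd_device fields, and grep -c counts them. The bare true visible in the trace absorbs grep's exit status of 1 when the count is zero, which would otherwise trip set -e. A sketch of that helper, assuming jq is installed and the RPC socket is reachable:

    nbd_count_sketch() {
        local sock=$1 names
        names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks |
                jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true      # grep -c still prints 0 on no match
    }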
00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.664 07:15:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:48.664 Malloc0 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:48.664 Malloc1 00:17:48.664 07:15:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.664 07:15:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:48.922 /dev/nbd0 00:17:48.922 07:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.922 07:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.922 07:15:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:48.923 1+0 records in 00:17:48.923 1+0 records out 
00:17:48.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000112054 s, 36.6 MB/s 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.923 07:15:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:48.923 07:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.923 07:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.923 07:15:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:49.180 /dev/nbd1 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:49.180 1+0 records in 00:17:49.180 1+0 records out 00:17:49.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144369 s, 28.4 MB/s 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:49.180 07:15:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.180 07:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:49.439 { 00:17:49.439 "nbd_device": "/dev/nbd0", 00:17:49.439 "bdev_name": "Malloc0" 00:17:49.439 }, 00:17:49.439 { 00:17:49.439 "nbd_device": "/dev/nbd1", 00:17:49.439 "bdev_name": "Malloc1" 00:17:49.439 } 
00:17:49.439 ]' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:49.439 { 00:17:49.439 "nbd_device": "/dev/nbd0", 00:17:49.439 "bdev_name": "Malloc0" 00:17:49.439 }, 00:17:49.439 { 00:17:49.439 "nbd_device": "/dev/nbd1", 00:17:49.439 "bdev_name": "Malloc1" 00:17:49.439 } 00:17:49.439 ]' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:49.439 /dev/nbd1' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:49.439 /dev/nbd1' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:49.439 256+0 records in 00:17:49.439 256+0 records out 00:17:49.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00887638 s, 118 MB/s 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:49.439 256+0 records in 00:17:49.439 256+0 records out 00:17:49.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014724 s, 71.2 MB/s 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:49.439 256+0 records in 00:17:49.439 256+0 records out 00:17:49.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173671 s, 60.4 MB/s 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.439 07:15:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.697 07:15:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:50.003 07:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:50.003 07:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:50.003 07:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:50.003 07:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:50.264 07:15:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:50.264 07:15:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:50.264 07:15:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:50.524 [2024-11-20 07:15:14.497840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:50.525 [2024-11-20 07:15:14.527278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.525 [2024-11-20 07:15:14.527294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.525 [2024-11-20 07:15:14.554744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.525 [2024-11-20 07:15:14.554795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:50.525 [2024-11-20 07:15:14.554801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:53.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:53.804 07:15:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57521 /var/tmp/spdk-nbd.sock 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57521 ']' 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
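
The nbd_dd_data_verify calls traced above boil down to a write/read-back round trip: fill a temp file from /dev/urandom, dd it onto each exported /dev/nbdX with oflag=direct, cmp the first 1M of every device back against that file, then stop the disks over RPC and poll /proc/partitions until the kernel drops them. A minimal standalone sketch of both steps, assuming the same two devices as the trace (the /tmp path and the 20x0.1s poll bounds are illustrative, not the values nbd_common.sh uses):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest   # trace uses test/event/nbdrandtest

    # write phase: known random payload onto every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: -b prints differing bytes, -n 1M bounds the compare
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # teardown: after nbd_stop_disk, wait for the kernel to drop the device
    wait_nbd_gone() {
        local name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # -w matches whole words so nbd1 never matches nbd10
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }
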
00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:53.804 07:15:17 event.app_repeat -- event/event.sh@39 -- # killprocess 57521 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57521 ']' 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57521 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57521 00:17:53.804 killing process with pid 57521 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57521' 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57521 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57521 00:17:53.804 spdk_app_start is called in Round 0. 00:17:53.804 Shutdown signal received, stop current app iteration 00:17:53.804 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:17:53.804 spdk_app_start is called in Round 1. 00:17:53.804 Shutdown signal received, stop current app iteration 00:17:53.804 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:17:53.804 spdk_app_start is called in Round 2. 00:17:53.804 Shutdown signal received, stop current app iteration 00:17:53.804 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:17:53.804 spdk_app_start is called in Round 3. 00:17:53.804 Shutdown signal received, stop current app iteration 00:17:53.804 07:15:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:17:53.804 07:15:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:17:53.804 00:17:53.804 real 0m16.907s 00:17:53.804 user 0m37.999s 00:17:53.804 sys 0m1.954s 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.804 ************************************ 00:17:53.804 END TEST app_repeat 00:17:53.804 ************************************ 00:17:53.804 07:15:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:53.804 07:15:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:17:53.804 07:15:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:53.804 07:15:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.804 07:15:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.804 07:15:17 event -- common/autotest_common.sh@10 -- # set +x 00:17:53.804 ************************************ 00:17:53.804 START TEST cpu_locks 00:17:53.804 ************************************ 00:17:53.804 07:15:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:53.804 * Looking for test storage... 
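
Every killprocess call in this run (pid 57521 here, and each spdk_tgt later) follows the same guarded shape: confirm the pid is alive with kill -0, read the live process's command name with ps so a recycled pid or a sudo wrapper is never signalled blindly, then kill and reap. A condensed sketch of that guard; reactor_0 is the command name the trace reports for the SPDK reactor process:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1      # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # note: wait reaps only children of this shell
    }
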
00:17:53.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:53.804 07:15:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.804 07:15:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.804 07:15:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.804 07:15:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.804 07:15:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.805 07:15:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.805 --rc genhtml_branch_coverage=1 00:17:53.805 --rc genhtml_function_coverage=1 00:17:53.805 --rc genhtml_legend=1 00:17:53.805 --rc geninfo_all_blocks=1 00:17:53.805 --rc geninfo_unexecuted_blocks=1 00:17:53.805 00:17:53.805 ' 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.805 --rc genhtml_branch_coverage=1 00:17:53.805 --rc genhtml_function_coverage=1 
00:17:53.805 --rc genhtml_legend=1 00:17:53.805 --rc geninfo_all_blocks=1 00:17:53.805 --rc geninfo_unexecuted_blocks=1 00:17:53.805 00:17:53.805 ' 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.805 --rc genhtml_branch_coverage=1 00:17:53.805 --rc genhtml_function_coverage=1 00:17:53.805 --rc genhtml_legend=1 00:17:53.805 --rc geninfo_all_blocks=1 00:17:53.805 --rc geninfo_unexecuted_blocks=1 00:17:53.805 00:17:53.805 ' 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.805 --rc genhtml_branch_coverage=1 00:17:53.805 --rc genhtml_function_coverage=1 00:17:53.805 --rc genhtml_legend=1 00:17:53.805 --rc geninfo_all_blocks=1 00:17:53.805 --rc geninfo_unexecuted_blocks=1 00:17:53.805 00:17:53.805 ' 00:17:53.805 07:15:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:17:53.805 07:15:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:17:53.805 07:15:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:17:53.805 07:15:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.805 07:15:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:53.805 ************************************ 00:17:53.805 START TEST default_locks 00:17:53.805 ************************************ 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57940 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57940 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57940 ']' 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.805 07:15:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:53.805 [2024-11-20 07:15:17.986790] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
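
Before the lock tests proper, the scripts/common.sh walk above (lt 1.15 2) shows how the harness decides whether the installed lcov is new enough: both version strings are split on . - : into arrays (the read -ra ver1/ver2 steps) and compared field by field as integers. A reduced sketch of that field-wise comparison — the function name is mine, and the real cmp_versions/decimal helpers also validate non-numeric fields:

    version_lt() {
        # succeeds when $1 sorts strictly before $2, numeric field by field
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"
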
00:17:53.805 [2024-11-20 07:15:17.986968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57940 ] 00:17:54.064 [2024-11-20 07:15:18.130851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.064 [2024-11-20 07:15:18.168416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.064 [2024-11-20 07:15:18.214908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.996 07:15:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.996 07:15:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:17:54.996 07:15:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57940 00:17:54.996 07:15:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57940 00:17:54.996 07:15:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57940 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 57940 ']' 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 57940 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57940 00:17:54.996 killing process with pid 57940 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57940' 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 57940 00:17:54.996 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 57940 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57940 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 57940 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 57940 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57940 ']' 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.254 
07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (57940) - No such process 00:17:55.254 ERROR: process (pid: 57940) is no longer running 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:55.254 00:17:55.254 real 0m1.351s 00:17:55.254 user 0m1.449s 00:17:55.254 sys 0m0.337s 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.254 07:15:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.254 ************************************ 00:17:55.254 END TEST default_locks 00:17:55.254 ************************************ 00:17:55.254 07:15:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:17:55.254 07:15:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:55.254 07:15:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.254 07:15:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.254 ************************************ 00:17:55.254 START TEST default_locks_via_rpc 00:17:55.254 ************************************ 00:17:55.254 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:17:55.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
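
The locks_exist checks threaded through these tests (locks_exist 57940 above, and again for every later target) are a single pipeline: list the POSIX file locks held by the target pid with lslocks and grep for the spdk_cpu_lock files claimed per core (the /var/tmp/spdk_cpu_lock_* files that check_remaining_locks enumerates near the end of this run). A sketch:

    locks_exist_sketch() {
        local pid=$1
        # true only if the pid currently holds at least one core lock file
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist_sketch 57940 && echo "pid 57940 holds its core lock"
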
00:17:55.254 07:15:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57986 00:17:55.254 07:15:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57986 00:17:55.254 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 57986 ']' 00:17:55.254 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.255 07:15:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:55.255 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.255 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.255 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.255 07:15:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.255 [2024-11-20 07:15:19.380917] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:55.255 [2024-11-20 07:15:19.381131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:17:55.513 [2024-11-20 07:15:19.521721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.513 [2024-11-20 07:15:19.558640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.513 [2024-11-20 07:15:19.604898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 57986 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57986 00:17:56.078 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57986 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 57986 ']' 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 57986 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57986 00:17:56.337 killing process with pid 57986 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57986' 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 57986 00:17:56.337 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 57986 00:17:56.595 ************************************ 00:17:56.595 END TEST default_locks_via_rpc 00:17:56.595 ************************************ 00:17:56.595 00:17:56.595 real 0m1.281s 00:17:56.595 user 0m1.379s 00:17:56.595 sys 0m0.331s 00:17:56.595 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.595 07:15:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.595 07:15:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:56.595 07:15:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.595 07:15:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.595 07:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:56.595 ************************************ 00:17:56.595 START TEST non_locking_app_on_locked_coremask 00:17:56.595 ************************************ 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58032 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58032 /var/tmp/spdk.sock 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58032 ']' 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
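
The framework_disable_cpumask_locks / framework_enable_cpumask_locks exchange traced above is what makes default_locks_via_rpc different: the same lock files are released and re-claimed over RPC on a live target, with no_locks asserting in between that the glob comes back empty. The shape of that exchange, using the rpc.py invocation seen in the trace (the ls checks are an illustrative stand-in for the no_locks helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null    # expect no output: locks released

    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*                # expect one file per claimed core
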
00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.595 07:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:56.595 [2024-11-20 07:15:20.702458] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:17:56.595 [2024-11-20 07:15:20.702516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:17:56.853 [2024-11-20 07:15:20.836457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.853 [2024-11-20 07:15:20.872084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.853 [2024-11-20 07:15:20.915450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58048 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58048 /var/tmp/spdk2.sock 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58048 ']' 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:17:57.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.425 07:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:57.683 [2024-11-20 07:15:21.631214] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:17:57.683 [2024-11-20 07:15:21.631295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58048 ] 00:17:57.683 [2024-11-20 07:15:21.783365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:57.683 [2024-11-20 07:15:21.783407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.683 [2024-11-20 07:15:21.854975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.940 [2024-11-20 07:15:21.937173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.505 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.506 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:17:58.506 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58032 00:17:58.506 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:58.506 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58032 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58032 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58032 ']' 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58032 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58032 00:17:58.763 killing process with pid 58032 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58032' 00:17:58.763 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58032 00:17:58.764 07:15:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58032 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58048 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58048 ']' 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58048 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58048 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.022 killing process with pid 58048 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58048' 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58048 00:17:59.022 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58048 00:17:59.280 ************************************ 00:17:59.280 END TEST non_locking_app_on_locked_coremask 00:17:59.280 ************************************ 00:17:59.280 00:17:59.280 real 0m2.729s 00:17:59.280 user 0m3.112s 00:17:59.280 sys 0m0.625s 00:17:59.280 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.280 07:15:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:59.280 07:15:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:17:59.280 07:15:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:59.280 07:15:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.280 07:15:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:59.280 ************************************ 00:17:59.280 START TEST locking_app_on_unlocked_coremask 00:17:59.280 ************************************ 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:17:59.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58098 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58098 /var/tmp/spdk.sock 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58098 ']' 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.280 07:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:59.280 [2024-11-20 07:15:23.462079] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:17:59.280 [2024-11-20 07:15:23.462145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58098 ] 00:17:59.537 [2024-11-20 07:15:23.594338] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:59.537 [2024-11-20 07:15:23.594377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.537 [2024-11-20 07:15:23.630819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.538 [2024-11-20 07:15:23.676780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58110 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58110 /var/tmp/spdk2.sock 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58110 ']' 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:00.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.471 07:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 [2024-11-20 07:15:24.391867] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
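
The pair of launches just traced is the crux of locking_app_on_unlocked_coremask: the first target (pid 58098) starts with --disable-cpumask-locks, so nothing holds the core-0 lock, and a second target on the same -m 0x1 mask, answering on its own RPC socket, starts cleanly and claims the lock itself. Reduced to the two commands from the log, with backgrounding and pid bookkeeping elided:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # instance 1: runs on core 0 but takes no lock
    $tgt -m 0x1 --disable-cpumask-locks &

    # instance 2: same core, separate RPC socket; starts and locks core 0
    $tgt -m 0x1 -r /var/tmp/spdk2.sock &
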
00:18:00.471 [2024-11-20 07:15:24.392059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:18:00.471 [2024-11-20 07:15:24.553565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.471 [2024-11-20 07:15:24.625830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.729 [2024-11-20 07:15:24.714411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.295 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.295 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:01.295 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58110 00:18:01.295 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58110 00:18:01.295 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58098 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58098 ']' 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58098 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58098 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.554 killing process with pid 58098 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58098' 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58098 00:18:01.554 07:15:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58098 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58110 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58110 ']' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58110 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58110 00:18:02.120 killing process with pid 58110 00:18:02.120 07:15:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58110' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58110 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58110 00:18:02.120 00:18:02.120 real 0m2.813s 00:18:02.120 user 0m3.221s 00:18:02.120 sys 0m0.640s 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:02.120 ************************************ 00:18:02.120 END TEST locking_app_on_unlocked_coremask 00:18:02.120 ************************************ 00:18:02.120 07:15:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:02.120 07:15:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.120 07:15:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.120 07:15:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:02.120 ************************************ 00:18:02.120 START TEST locking_app_on_locked_coremask 00:18:02.120 ************************************ 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:18:02.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58165 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58165 /var/tmp/spdk.sock 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58165 ']' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.120 07:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:02.379 [2024-11-20 07:15:26.319317] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:02.379 [2024-11-20 07:15:26.319382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:18:02.379 [2024-11-20 07:15:26.453523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.379 [2024-11-20 07:15:26.485203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.379 [2024-11-20 07:15:26.525725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58181 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58181 /var/tmp/spdk2.sock 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58181 /var/tmp/spdk2.sock 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.311 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58181 /var/tmp/spdk2.sock 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58181 ']' 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:03.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.312 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:03.312 [2024-11-20 07:15:27.213208] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
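
locking_app_on_locked_coremask inverts that setup: the first target (pid 58165) does hold the core-0 lock, so the second launch traced just below must die with the claim_cpu_cores "Cannot create lock on core 0" error, and the NOT wrapper turns that expected failure into a pass. A sketch of the expectation, simplified to a direct exit-status check rather than the NOT/waitforlisten pair the harness actually uses:

    # a second instance on an already-locked core must fail to start
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started on a locked core" >&2
        exit 1
    fi
    echo "expected startup failure observed"
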
00:18:03.312 [2024-11-20 07:15:27.213379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58181 ] 00:18:03.312 [2024-11-20 07:15:27.355416] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58165 has claimed it. 00:18:03.312 [2024-11-20 07:15:27.355466] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:03.877 ERROR: process (pid: 58181) is no longer running 00:18:03.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58181) - No such process 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58165 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58165 00:18:03.877 07:15:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58165 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58165 ']' 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58165 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58165 00:18:04.135 killing process with pid 58165 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58165' 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58165 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58165 00:18:04.135 ************************************ 00:18:04.135 END TEST locking_app_on_locked_coremask 00:18:04.135 ************************************ 00:18:04.135 00:18:04.135 real 0m2.020s 00:18:04.135 user 0m2.344s 00:18:04.135 sys 0m0.373s 00:18:04.135 07:15:28 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.135 07:15:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 07:15:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:04.135 07:15:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.135 07:15:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.135 07:15:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:04.393 ************************************ 00:18:04.393 START TEST locking_overlapped_coremask 00:18:04.393 ************************************ 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58221 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58221 /var/tmp/spdk.sock 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:04.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58221 ']' 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.393 07:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:04.393 [2024-11-20 07:15:28.376415] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:04.393 [2024-11-20 07:15:28.376476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58221 ] 00:18:04.393 [2024-11-20 07:15:28.512677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:04.393 [2024-11-20 07:15:28.549980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.393 [2024-11-20 07:15:28.550068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.393 [2024-11-20 07:15:28.550070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.652 [2024-11-20 07:15:28.593946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58239 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58239 /var/tmp/spdk2.sock 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58239 /var/tmp/spdk2.sock 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:05.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58239 /var/tmp/spdk2.sock 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58239 ']' 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.259 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:05.259 [2024-11-20 07:15:29.292279] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
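The two core masks in play here overlap: the first target was started with -m 0x7 and this second one with -m 0x1c, so both want core 2. A minimal bash decoding of the masks (illustrative only; the masks themselves are taken verbatim from the spdk_tgt commands above):

for mask in 0x7 0x1c; do
  cores=()
  for c in {0..7}; do (( mask & (1 << c) )) && cores+=("$c"); done
  echo "$mask -> cores ${cores[*]}"
done
# 0x7  -> cores 0 1 2
# 0x1c -> cores 2 3 4   (core 2 overlaps, hence the claim failure that follows)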
00:18:05.259 [2024-11-20 07:15:29.292368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58239 ] 00:18:05.259 [2024-11-20 07:15:29.450710] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58221 has claimed it. 00:18:05.259 [2024-11-20 07:15:29.450766] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:05.825 ERROR: process (pid: 58239) is no longer running 00:18:05.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58239) - No such process 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58221 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58221 ']' 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58221 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58221 00:18:05.825 killing process with pid 58221 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58221' 00:18:05.825 07:15:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58221 00:18:05.825 07:15:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58221 00:18:06.083 00:18:06.083 real 0m1.854s 00:18:06.083 user 0m5.352s 00:18:06.083 sys 0m0.274s 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.083 ************************************ 00:18:06.083 END TEST locking_overlapped_coremask 00:18:06.083 ************************************ 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:06.083 07:15:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:06.083 07:15:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:06.083 07:15:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.083 07:15:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:06.083 ************************************ 00:18:06.083 START TEST locking_overlapped_coremask_via_rpc 00:18:06.083 ************************************ 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58279 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58279 /var/tmp/spdk.sock 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58279 ']' 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.083 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:06.084 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.084 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.084 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.084 07:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.084 [2024-11-20 07:15:30.272457] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:06.084 [2024-11-20 07:15:30.272632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:18:06.341 [2024-11-20 07:15:30.404782] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
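Unlike the previous test, both targets here start with --disable-cpumask-locks, so neither claims its per-core lock files at boot; the locks are only taken later via RPC. A quick way to confirm no locks are held yet, reusing the same lslocks check the harness itself runs above ($spdk_tgt_pid is the variable the test assigns; the standalone invocation is a sketch):

lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock \
  || echo "no core locks held by $spdk_tgt_pid yet"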
00:18:06.341 [2024-11-20 07:15:30.404822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:06.341 [2024-11-20 07:15:30.442036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.341 [2024-11-20 07:15:30.442123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.341 [2024-11-20 07:15:30.442122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.341 [2024-11-20 07:15:30.487276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58297 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58297 /var/tmp/spdk2.sock 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58297 ']' 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:07.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.274 07:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.274 [2024-11-20 07:15:31.182928] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:07.274 [2024-11-20 07:15:31.183142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:18:07.274 [2024-11-20 07:15:31.338616] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:07.274 [2024-11-20 07:15:31.338653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.274 [2024-11-20 07:15:31.413446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.274 [2024-11-20 07:15:31.413511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.274 [2024-11-20 07:15:31.413513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:07.531 [2024-11-20 07:15:31.505514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.097 [2024-11-20 07:15:32.072316] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58279 has claimed it. 00:18:08.097 request: 00:18:08.097 { 00:18:08.097 "method": "framework_enable_cpumask_locks", 00:18:08.097 "req_id": 1 00:18:08.097 } 00:18:08.097 Got JSON-RPC error response 00:18:08.097 response: 00:18:08.097 { 00:18:08.097 "code": -32603, 00:18:08.097 "message": "Failed to claim CPU core: 2" 00:18:08.097 } 00:18:08.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
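The request/response pair above is the heart of this test: with locks deactivated at startup, framework_enable_cpumask_locks succeeds on the first target but must fail on the second, because core 2 is already claimed. The same exchange as it could be driven by hand with rpc.py (socket path, method name, and error text are verbatim from the log; the standalone sequencing is a sketch):

scripts/rpc.py framework_enable_cpumask_locks          # first target (-m 0x7): locks cores 0-2
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
  || echo "expected: -32603, Failed to claim CPU core: 2"   # second target (-m 0x1c)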
00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58279 /var/tmp/spdk.sock 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58279 ']' 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58297 /var/tmp/spdk2.sock 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58297 ']' 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.097 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.355 ************************************ 00:18:08.355 END TEST locking_overlapped_coremask_via_rpc 00:18:08.355 ************************************ 00:18:08.355 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.355 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:08.355 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:08.356 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:08.356 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:08.356 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:08.356 00:18:08.356 real 0m2.259s 00:18:08.356 user 0m1.049s 00:18:08.356 sys 0m0.141s 00:18:08.356 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.356 07:15:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.356 07:15:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:18:08.356 07:15:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58279 ]] 00:18:08.356 07:15:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58279 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58279 ']' 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58279 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58279 00:18:08.356 killing process with pid 58279 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58279' 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58279 00:18:08.356 07:15:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58279 00:18:08.614 07:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58297 ]] 00:18:08.614 07:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58297 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58297 ']' 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58297 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.614 
07:15:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58297 00:18:08.614 killing process with pid 58297 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58297' 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58297 00:18:08.614 07:15:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58297 00:18:08.873 07:15:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:08.873 07:15:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:18:08.873 07:15:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58279 ]] 00:18:08.873 07:15:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58279 00:18:08.873 Process with pid 58279 is not found 00:18:08.873 07:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58279 ']' 00:18:08.873 07:15:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58279 00:18:08.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58279) - No such process 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58279 is not found' 00:18:08.874 07:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58297 ]] 00:18:08.874 07:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58297 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58297 ']' 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58297 00:18:08.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58297) - No such process 00:18:08.874 Process with pid 58297 is not found 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58297 is not found' 00:18:08.874 07:15:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:08.874 ************************************ 00:18:08.874 END TEST cpu_locks 00:18:08.874 ************************************ 00:18:08.874 00:18:08.874 real 0m15.146s 00:18:08.874 user 0m27.846s 00:18:08.874 sys 0m3.305s 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.874 07:15:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:08.874 ************************************ 00:18:08.874 END TEST event 00:18:08.874 ************************************ 00:18:08.874 00:18:08.874 real 0m38.406s 00:18:08.874 user 1m16.979s 00:18:08.874 sys 0m5.805s 00:18:08.874 07:15:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.874 07:15:32 event -- common/autotest_common.sh@10 -- # set +x 00:18:08.874 07:15:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:08.874 07:15:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.874 07:15:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.874 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:18:08.874 ************************************ 00:18:08.874 START TEST thread 00:18:08.874 ************************************ 00:18:08.874 07:15:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:09.132 * Looking for test storage... 
00:18:09.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.132 07:15:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.132 07:15:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.132 07:15:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.132 07:15:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.132 07:15:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.132 07:15:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.132 07:15:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.132 07:15:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.132 07:15:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.132 07:15:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.132 07:15:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.132 07:15:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:18:09.132 07:15:33 thread -- scripts/common.sh@345 -- # : 1 00:18:09.132 07:15:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.132 07:15:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.132 07:15:33 thread -- scripts/common.sh@365 -- # decimal 1 00:18:09.132 07:15:33 thread -- scripts/common.sh@353 -- # local d=1 00:18:09.132 07:15:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.132 07:15:33 thread -- scripts/common.sh@355 -- # echo 1 00:18:09.132 07:15:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.132 07:15:33 thread -- scripts/common.sh@366 -- # decimal 2 00:18:09.132 07:15:33 thread -- scripts/common.sh@353 -- # local d=2 00:18:09.132 07:15:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.132 07:15:33 thread -- scripts/common.sh@355 -- # echo 2 00:18:09.132 07:15:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.132 07:15:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.132 07:15:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.132 07:15:33 thread -- scripts/common.sh@368 -- # return 0 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.132 --rc genhtml_branch_coverage=1 00:18:09.132 --rc genhtml_function_coverage=1 00:18:09.132 --rc genhtml_legend=1 00:18:09.132 --rc geninfo_all_blocks=1 00:18:09.132 --rc geninfo_unexecuted_blocks=1 00:18:09.132 00:18:09.132 ' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.132 --rc genhtml_branch_coverage=1 00:18:09.132 --rc genhtml_function_coverage=1 00:18:09.132 --rc genhtml_legend=1 00:18:09.132 --rc geninfo_all_blocks=1 00:18:09.132 --rc geninfo_unexecuted_blocks=1 00:18:09.132 00:18:09.132 ' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:18:09.132 --rc genhtml_branch_coverage=1 00:18:09.132 --rc genhtml_function_coverage=1 00:18:09.132 --rc genhtml_legend=1 00:18:09.132 --rc geninfo_all_blocks=1 00:18:09.132 --rc geninfo_unexecuted_blocks=1 00:18:09.132 00:18:09.132 ' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.132 --rc genhtml_branch_coverage=1 00:18:09.132 --rc genhtml_function_coverage=1 00:18:09.132 --rc genhtml_legend=1 00:18:09.132 --rc geninfo_all_blocks=1 00:18:09.132 --rc geninfo_unexecuted_blocks=1 00:18:09.132 00:18:09.132 ' 00:18:09.132 07:15:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.132 07:15:33 thread -- common/autotest_common.sh@10 -- # set +x 00:18:09.132 ************************************ 00:18:09.132 START TEST thread_poller_perf 00:18:09.132 ************************************ 00:18:09.132 07:15:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:09.132 [2024-11-20 07:15:33.175498] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:09.132 [2024-11-20 07:15:33.175559] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58422 ] 00:18:09.132 [2024-11-20 07:15:33.313198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.423 [2024-11-20 07:15:33.343995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.423 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:18:10.379 [2024-11-20T07:15:34.582Z] ====================================== 00:18:10.379 [2024-11-20T07:15:34.582Z] busy:2606915884 (cyc) 00:18:10.379 [2024-11-20T07:15:34.582Z] total_run_count: 393000 00:18:10.379 [2024-11-20T07:15:34.582Z] tsc_hz: 2600000000 (cyc) 00:18:10.379 [2024-11-20T07:15:34.582Z] ====================================== 00:18:10.379 [2024-11-20T07:15:34.582Z] poller_cost: 6633 (cyc), 2551 (nsec) 00:18:10.379 00:18:10.379 real 0m1.224s 00:18:10.379 user 0m1.090s 00:18:10.379 sys 0m0.028s 00:18:10.379 07:15:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.379 07:15:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:10.379 ************************************ 00:18:10.379 END TEST thread_poller_perf 00:18:10.379 ************************************ 00:18:10.379 07:15:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:10.379 07:15:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:10.379 07:15:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.379 07:15:34 thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.379 ************************************ 00:18:10.379 START TEST thread_poller_perf 00:18:10.379 ************************************ 00:18:10.379 07:15:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:10.379 [2024-11-20 07:15:34.438777] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:10.379 [2024-11-20 07:15:34.439029] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58452 ] 00:18:10.379 [2024-11-20 07:15:34.573516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.637 [2024-11-20 07:15:34.609234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.637 Running 1000 pollers for 1 seconds with 0 microseconds period. 
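The poller_cost row in these result tables is derived from the other rows: cycles per call is busy divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Reproducing the first table's numbers with bash integer arithmetic (values copied from the table above):

busy=2606915884 runs=393000 tsc_hz=2600000000
echo "poller_cost: $(( busy / runs )) cyc"                          # 6633
echo "poller_cost: $(( busy * 1000000000 / tsc_hz / runs )) nsec"   # 2551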
00:18:11.572 [2024-11-20T07:15:35.775Z] ====================================== 00:18:11.572 [2024-11-20T07:15:35.775Z] busy:2601766166 (cyc) 00:18:11.572 [2024-11-20T07:15:35.775Z] total_run_count: 4106000 00:18:11.572 [2024-11-20T07:15:35.775Z] tsc_hz: 2600000000 (cyc) 00:18:11.572 [2024-11-20T07:15:35.775Z] ====================================== 00:18:11.572 [2024-11-20T07:15:35.775Z] poller_cost: 633 (cyc), 243 (nsec) 00:18:11.572 00:18:11.572 real 0m1.217s 00:18:11.572 user 0m1.079s 00:18:11.572 sys 0m0.031s 00:18:11.572 07:15:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.572 07:15:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:11.572 ************************************ 00:18:11.572 END TEST thread_poller_perf 00:18:11.572 ************************************ 00:18:11.572 07:15:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:11.572 ************************************ 00:18:11.572 END TEST thread 00:18:11.572 ************************************ 00:18:11.572 00:18:11.572 real 0m2.657s 00:18:11.572 user 0m2.280s 00:18:11.572 sys 0m0.166s 00:18:11.572 07:15:35 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.572 07:15:35 thread -- common/autotest_common.sh@10 -- # set +x 00:18:11.572 07:15:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:18:11.572 07:15:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:11.572 07:15:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:11.572 07:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.572 07:15:35 -- common/autotest_common.sh@10 -- # set +x 00:18:11.572 ************************************ 00:18:11.572 START TEST app_cmdline 00:18:11.572 ************************************ 00:18:11.572 07:15:35 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:11.572 * Looking for test storage... 
00:18:11.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:11.572 07:15:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.572 07:15:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.572 07:15:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.829 07:15:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.829 07:15:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.830 07:15:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.830 --rc genhtml_branch_coverage=1 00:18:11.830 --rc genhtml_function_coverage=1 00:18:11.830 --rc genhtml_legend=1 00:18:11.830 --rc geninfo_all_blocks=1 00:18:11.830 --rc geninfo_unexecuted_blocks=1 00:18:11.830 00:18:11.830 ' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.830 --rc genhtml_branch_coverage=1 00:18:11.830 --rc genhtml_function_coverage=1 00:18:11.830 --rc genhtml_legend=1 00:18:11.830 --rc geninfo_all_blocks=1 00:18:11.830 --rc geninfo_unexecuted_blocks=1 00:18:11.830 
00:18:11.830 ' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.830 --rc genhtml_branch_coverage=1 00:18:11.830 --rc genhtml_function_coverage=1 00:18:11.830 --rc genhtml_legend=1 00:18:11.830 --rc geninfo_all_blocks=1 00:18:11.830 --rc geninfo_unexecuted_blocks=1 00:18:11.830 00:18:11.830 ' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.830 --rc genhtml_branch_coverage=1 00:18:11.830 --rc genhtml_function_coverage=1 00:18:11.830 --rc genhtml_legend=1 00:18:11.830 --rc geninfo_all_blocks=1 00:18:11.830 --rc geninfo_unexecuted_blocks=1 00:18:11.830 00:18:11.830 ' 00:18:11.830 07:15:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:11.830 07:15:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58540 00:18:11.830 07:15:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58540 00:18:11.830 07:15:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:11.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58540 ']' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.830 07:15:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 [2024-11-20 07:15:35.880930] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
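This target is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the test can verify both that the whitelisted methods answer and that everything else is rejected. In rpc.py terms (method names and error code are verbatim from the log; the standalone invocations are a sketch):

scripts/rpc.py spdk_get_version        # allowed: returns the version JSON seen below
scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two whitelisted methods
scripts/rpc.py env_dpdk_get_mem_stats \
  || echo "expected: -32601, Method not found"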
00:18:11.830 [2024-11-20 07:15:35.880990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58540 ] 00:18:11.830 [2024-11-20 07:15:36.019476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.087 [2024-11-20 07:15:36.054157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.087 [2024-11-20 07:15:36.097333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.651 07:15:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.651 07:15:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:18:12.651 07:15:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:12.908 { 00:18:12.908 "version": "SPDK v25.01-pre git sha1 400f484f7", 00:18:12.908 "fields": { 00:18:12.908 "major": 25, 00:18:12.908 "minor": 1, 00:18:12.908 "patch": 0, 00:18:12.908 "suffix": "-pre", 00:18:12.908 "commit": "400f484f7" 00:18:12.908 } 00:18:12.908 } 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:12.908 07:15:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:12.908 07:15:36 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:13.166 request: 00:18:13.166 { 00:18:13.166 "method": "env_dpdk_get_mem_stats", 00:18:13.166 "req_id": 1 00:18:13.166 } 00:18:13.166 Got JSON-RPC error response 00:18:13.166 response: 00:18:13.166 { 00:18:13.166 "code": -32601, 00:18:13.166 "message": "Method not found" 00:18:13.166 } 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.166 07:15:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58540 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58540 ']' 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58540 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58540 00:18:13.166 killing process with pid 58540 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58540' 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 58540 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 58540 00:18:13.166 ************************************ 00:18:13.166 END TEST app_cmdline 00:18:13.166 ************************************ 00:18:13.166 00:18:13.166 real 0m1.638s 00:18:13.166 user 0m2.014s 00:18:13.166 sys 0m0.306s 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.166 07:15:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:13.423 07:15:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:13.423 07:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.423 07:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.423 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:13.423 ************************************ 00:18:13.424 START TEST version 00:18:13.424 ************************************ 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:13.424 * Looking for test storage... 
00:18:13.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.424 07:15:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.424 07:15:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.424 07:15:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.424 07:15:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.424 07:15:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.424 07:15:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.424 07:15:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.424 07:15:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.424 07:15:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.424 07:15:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.424 07:15:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.424 07:15:37 version -- scripts/common.sh@344 -- # case "$op" in 00:18:13.424 07:15:37 version -- scripts/common.sh@345 -- # : 1 00:18:13.424 07:15:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.424 07:15:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.424 07:15:37 version -- scripts/common.sh@365 -- # decimal 1 00:18:13.424 07:15:37 version -- scripts/common.sh@353 -- # local d=1 00:18:13.424 07:15:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.424 07:15:37 version -- scripts/common.sh@355 -- # echo 1 00:18:13.424 07:15:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.424 07:15:37 version -- scripts/common.sh@366 -- # decimal 2 00:18:13.424 07:15:37 version -- scripts/common.sh@353 -- # local d=2 00:18:13.424 07:15:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.424 07:15:37 version -- scripts/common.sh@355 -- # echo 2 00:18:13.424 07:15:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.424 07:15:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.424 07:15:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.424 07:15:37 version -- scripts/common.sh@368 -- # return 0 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.424 --rc genhtml_branch_coverage=1 00:18:13.424 --rc genhtml_function_coverage=1 00:18:13.424 --rc genhtml_legend=1 00:18:13.424 --rc geninfo_all_blocks=1 00:18:13.424 --rc geninfo_unexecuted_blocks=1 00:18:13.424 00:18:13.424 ' 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.424 --rc genhtml_branch_coverage=1 00:18:13.424 --rc genhtml_function_coverage=1 00:18:13.424 --rc genhtml_legend=1 00:18:13.424 --rc geninfo_all_blocks=1 00:18:13.424 --rc geninfo_unexecuted_blocks=1 00:18:13.424 00:18:13.424 ' 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.424 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:13.424 --rc genhtml_branch_coverage=1 00:18:13.424 --rc genhtml_function_coverage=1 00:18:13.424 --rc genhtml_legend=1 00:18:13.424 --rc geninfo_all_blocks=1 00:18:13.424 --rc geninfo_unexecuted_blocks=1 00:18:13.424 00:18:13.424 ' 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.424 --rc genhtml_branch_coverage=1 00:18:13.424 --rc genhtml_function_coverage=1 00:18:13.424 --rc genhtml_legend=1 00:18:13.424 --rc geninfo_all_blocks=1 00:18:13.424 --rc geninfo_unexecuted_blocks=1 00:18:13.424 00:18:13.424 ' 00:18:13.424 07:15:37 version -- app/version.sh@17 -- # get_header_version major 00:18:13.424 07:15:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # cut -f2 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.424 07:15:37 version -- app/version.sh@17 -- # major=25 00:18:13.424 07:15:37 version -- app/version.sh@18 -- # get_header_version minor 00:18:13.424 07:15:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # cut -f2 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.424 07:15:37 version -- app/version.sh@18 -- # minor=1 00:18:13.424 07:15:37 version -- app/version.sh@19 -- # get_header_version patch 00:18:13.424 07:15:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # cut -f2 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.424 07:15:37 version -- app/version.sh@19 -- # patch=0 00:18:13.424 07:15:37 version -- app/version.sh@20 -- # get_header_version suffix 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # cut -f2 00:18:13.424 07:15:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.424 07:15:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.424 07:15:37 version -- app/version.sh@20 -- # suffix=-pre 00:18:13.424 07:15:37 version -- app/version.sh@22 -- # version=25.1 00:18:13.424 07:15:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:13.424 07:15:37 version -- app/version.sh@28 -- # version=25.1rc0 00:18:13.424 07:15:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:13.424 07:15:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:13.424 07:15:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:13.424 07:15:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:13.424 ************************************ 00:18:13.424 END TEST version 00:18:13.424 ************************************ 00:18:13.424 00:18:13.424 real 0m0.200s 00:18:13.424 user 0m0.120s 00:18:13.424 sys 0m0.106s 00:18:13.424 07:15:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.424 07:15:37 version -- common/autotest_common.sh@10 -- # set +x 00:18:13.424 07:15:37 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:13.424 07:15:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:18:13.424 07:15:37 -- spdk/autotest.sh@194 -- # uname -s 00:18:13.681 07:15:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:13.681 07:15:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:13.681 07:15:37 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:18:13.681 07:15:37 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:18:13.681 07:15:37 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:13.681 07:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.681 07:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.681 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:13.681 ************************************ 00:18:13.681 START TEST spdk_dd 00:18:13.681 ************************************ 00:18:13.681 07:15:37 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:13.681 * Looking for test storage... 00:18:13.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:13.681 07:15:37 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.681 07:15:37 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@345 -- # : 1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@368 -- # return 0 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.682 --rc genhtml_branch_coverage=1 00:18:13.682 --rc genhtml_function_coverage=1 00:18:13.682 --rc genhtml_legend=1 00:18:13.682 --rc geninfo_all_blocks=1 00:18:13.682 --rc geninfo_unexecuted_blocks=1 00:18:13.682 00:18:13.682 ' 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.682 --rc genhtml_branch_coverage=1 00:18:13.682 --rc genhtml_function_coverage=1 00:18:13.682 --rc genhtml_legend=1 00:18:13.682 --rc geninfo_all_blocks=1 00:18:13.682 --rc geninfo_unexecuted_blocks=1 00:18:13.682 00:18:13.682 ' 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.682 --rc genhtml_branch_coverage=1 00:18:13.682 --rc genhtml_function_coverage=1 00:18:13.682 --rc genhtml_legend=1 00:18:13.682 --rc geninfo_all_blocks=1 00:18:13.682 --rc geninfo_unexecuted_blocks=1 00:18:13.682 00:18:13.682 ' 00:18:13.682 07:15:37 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.682 --rc genhtml_branch_coverage=1 00:18:13.682 --rc genhtml_function_coverage=1 00:18:13.682 --rc genhtml_legend=1 00:18:13.682 --rc geninfo_all_blocks=1 00:18:13.682 --rc geninfo_unexecuted_blocks=1 00:18:13.682 00:18:13.682 ' 00:18:13.682 07:15:37 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.682 07:15:37 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.682 07:15:37 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.682 07:15:37 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.682 07:15:37 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.682 07:15:37 spdk_dd -- paths/export.sh@5 -- # export PATH 00:18:13.682 07:15:37 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.682 07:15:37 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:13.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.941 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:13.941 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:13.941 07:15:38 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:18:13.941 07:15:38 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@233 -- # local class 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@235 -- # local progif 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@236 -- # class=01 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:18:13.941 07:15:38 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@18 -- # local i 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@27 -- # return 0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@18 -- # local i 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@27 -- # return 0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:18:13.941 07:15:38 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:13.941 07:15:38 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@139 -- # local lib 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
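The device discovery traced above (nvme_in_userspace) narrows lspci output to PCI class 01, subclass 08, prog-if 02 and keeps each BDF only if the kernel nvme driver has not claimed it; the liburing checks that begin here continue below. A minimal standalone sketch of that discovery step, with the PCI_ALLOWED/PCI_BLOCKED filtering done by pci_can_use elided for brevity:

# Sketch of the nvme_in_userspace enumeration from scripts/common.sh
# (simplified): class/subclass "0108" with prog-if 02 identifies NVMe
# controllers; a BDF present under /sys/bus/pci/drivers/nvme belongs to
# the kernel driver and is skipped, everything else is usable from
# userspace (uio_pci_generic/vfio-pci, as on this VM).
nvme_in_userspace_sketch() {
    local bdf bdfs=()
    while read -r bdf; do
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue  # kernel-owned
        bdfs+=("$bdf")
    done < <(lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
    ((${#bdfs[@]})) && printf '%s\n' "${bdfs[@]}"
}
nvme_in_userspace_sketch  # prints 0000:00:10.0 and 0000:00:11.0 in this run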
00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
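The long run of [[ lib == liburing.so.* ]] tests above, continuing below, is check_liburing from dd/common.sh walking every DT_NEEDED entry of the spdk_dd binary. A compact sketch of the whole check, assuming the paths from this run:

# Sketch of check_liburing: objdump -p prints the dynamic section,
# grep NEEDED keeps the DT_NEEDED entries, and any liburing.so.* among
# them flags the binary as uring-enabled.
check_liburing_sketch() {
    local lib liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    ((liburing_in_use)) && printf '* spdk_dd linked to liburing\n'
    return 0
}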
00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:18:13.941 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:18:13.942 * spdk_dd linked to liburing 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:13.942 07:15:38 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:13.942 07:15:38 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:13.943 07:15:38 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:13.943 07:15:38 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:18:13.943 07:15:38 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:18:13.943 07:15:38 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:18:13.943 07:15:38 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:18:13.943 07:15:38 spdk_dd -- dd/common.sh@153 -- # return 0 00:18:13.943 07:15:38 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:18:13.943 07:15:38 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:18:13.943 07:15:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:13.943 07:15:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.943 07:15:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:13.943 ************************************ 00:18:13.943 START TEST spdk_dd_basic_rw 00:18:13.943 ************************************ 00:18:13.943 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:18:14.201 * Looking for test storage... 00:18:14.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.201 --rc genhtml_branch_coverage=1 00:18:14.201 --rc genhtml_function_coverage=1 00:18:14.201 --rc genhtml_legend=1 00:18:14.201 --rc geninfo_all_blocks=1 00:18:14.201 --rc geninfo_unexecuted_blocks=1 00:18:14.201 00:18:14.201 ' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.201 --rc genhtml_branch_coverage=1 00:18:14.201 --rc genhtml_function_coverage=1 00:18:14.201 --rc genhtml_legend=1 00:18:14.201 --rc geninfo_all_blocks=1 00:18:14.201 --rc geninfo_unexecuted_blocks=1 00:18:14.201 00:18:14.201 ' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.201 --rc genhtml_branch_coverage=1 00:18:14.201 --rc genhtml_function_coverage=1 00:18:14.201 --rc genhtml_legend=1 00:18:14.201 --rc geninfo_all_blocks=1 00:18:14.201 --rc geninfo_unexecuted_blocks=1 00:18:14.201 00:18:14.201 ' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.201 --rc genhtml_branch_coverage=1 00:18:14.201 --rc genhtml_function_coverage=1 00:18:14.201 --rc genhtml_legend=1 00:18:14.201 --rc geninfo_all_blocks=1 00:18:14.201 --rc geninfo_unexecuted_blocks=1 00:18:14.201 00:18:14.201 ' 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.201 07:15:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
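basic_rw.sh has just described its controller as the associative array method_bdev_nvme_attach_controller_0; gen_conf later renders that into the JSON spdk_dd reads from /dev/fd/61 (visible near the end of this section). gen_conf itself is not traced here, so the renderer below is a hypothetical stand-in that only illustrates the array-to-JSON shape:

declare -A method_bdev_nvme_attach_controller_0=(
    ['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie'
)

# Hypothetical helper (not SPDK's gen_conf): emit the bdev subsystem
# config that attaches the controller, then waits for bdev examination.
render_attach_json() {
    local -n p=$1  # nameref to a method_* array (bash 4.3+)
    printf '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"%s","traddr":"%s","name":"%s"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}\n' \
        "${p[trtype]}" "${p[traddr]}" "${p[name]}"
}

render_attach_json method_bdev_nvme_attach_controller_0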
00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:18:14.202 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:18:14.462 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:18:14.462 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04
00:18:14.463 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ <spdk_nvme_identify output identical to the dump above; duplicate elided> =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:18:14.463 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:18:14.463 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:18:14.464 ************************************
00:18:14.464 START TEST dd_bs_lt_native_bs ************************************
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:14.464 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:14.464 { 00:18:14.464 "subsystems": [ 00:18:14.464 { 00:18:14.464 "subsystem": "bdev", 00:18:14.464 "config": [ 00:18:14.464 { 00:18:14.464 "params": { 00:18:14.464 "trtype": "pcie", 00:18:14.464 "traddr": "0000:00:10.0", 00:18:14.464 "name": "Nvme0" 00:18:14.464 }, 00:18:14.464 "method": "bdev_nvme_attach_controller" 00:18:14.464 }, 00:18:14.464 { 00:18:14.464 "method": "bdev_wait_for_examine" 00:18:14.464 } 00:18:14.464 ] 00:18:14.464 } 00:18:14.464 ] 00:18:14.464 } 00:18:14.464 [2024-11-20 07:15:38.482028] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:14.464 [2024-11-20 07:15:38.482183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:18:14.464 [2024-11-20 07:15:38.623325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.464 [2024-11-20 07:15:38.658960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.722 [2024-11-20 07:15:38.689371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.722 [2024-11-20 07:15:38.783016] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:18:14.722 [2024-11-20 07:15:38.783066] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:14.722 [2024-11-20 07:15:38.849101] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.722 ************************************ 00:18:14.722 END TEST dd_bs_lt_native_bs 00:18:14.722 ************************************ 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:18:14.722 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.722 00:18:14.723 real 0m0.446s 00:18:14.723 user 0m0.286s 00:18:14.723 sys 0m0.096s 00:18:14.723 
07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.723 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:18:14.723 07:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:18:14.723 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.723 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.723 07:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:14.980 ************************************ 00:18:14.980 START TEST dd_rw 00:18:14.980 ************************************ 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:14.980 07:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:15.238 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:18:15.238 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:15.238 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:15.238 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:15.238 [2024-11-20 07:15:39.406152] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
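The dd_rw setup traced above derives everything from the native block size that dd/common.sh extracted from the identify dump (the "LBA Format #04: Data Size" regex match earlier in the trace). A minimal sketch of the same derivation, with id_output standing in for the captured dump and only native_bs, qds, bss, count and size taken from the trace:

    pattern='LBA Format #04: Data Size: *([0-9]+)'
    if [[ $id_output =~ $pattern ]]; then
        native_bs=${BASH_REMATCH[1]}   # 4096 on this QEMU-emulated controller
    fi
    qds=(1 64)                         # queue depths exercised at each block size
    bss=()
    for s in {0..2}; do
        bss+=($((native_bs << s)))     # 4096, 8192, 16384
    done
    count=15
    size=$((count * bss[0]))           # 61440 bytes for the first pass

Each larger block size gets a smaller count later in the trace (7 at 8192, 3 at 16384), which keeps every pass at or under 64 KiB.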
00:18:15.238 [2024-11-20 07:15:39.406434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:18:15.238 { 00:18:15.238 "subsystems": [ 00:18:15.238 { 00:18:15.238 "subsystem": "bdev", 00:18:15.238 "config": [ 00:18:15.238 { 00:18:15.238 "params": { 00:18:15.238 "trtype": "pcie", 00:18:15.238 "traddr": "0000:00:10.0", 00:18:15.238 "name": "Nvme0" 00:18:15.238 }, 00:18:15.238 "method": "bdev_nvme_attach_controller" 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "method": "bdev_wait_for_examine" 00:18:15.238 } 00:18:15.238 ] 00:18:15.238 } 00:18:15.238 ] 00:18:15.238 } 00:18:15.496 [2024-11-20 07:15:39.554961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.496 [2024-11-20 07:15:39.591679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.496 [2024-11-20 07:15:39.622377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.754  [2024-11-20T07:15:39.957Z] Copying: 60/60 [kB] (average 29 MBps) 00:18:15.754 00:18:15.754 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:18:15.754 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:15.754 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:15.754 07:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:15.754 [2024-11-20 07:15:39.860109] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
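The JSON that gen_conf keeps emitting over /dev/fd/61 and /dev/fd/62 is the same two-step bdev configuration every time: attach the PCIe controller at 0000:00:10.0 as Nvme0, then wait for bdev examination. Reassembled from the fragments in the trace (writing it to a file here is only for illustration; the harness streams it through process substitution):

    cat > conf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

This is also the config the earlier dd_bs_lt_native_bs run received; there the NOT wrapper inverted spdk_dd's exit status, so the "--bs value cannot be less than input (1) neither output (4096) native block size" error above is the expected, passing outcome.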
00:18:15.754 [2024-11-20 07:15:39.860169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:18:15.754 { 00:18:15.754 "subsystems": [ 00:18:15.754 { 00:18:15.754 "subsystem": "bdev", 00:18:15.754 "config": [ 00:18:15.754 { 00:18:15.754 "params": { 00:18:15.754 "trtype": "pcie", 00:18:15.754 "traddr": "0000:00:10.0", 00:18:15.754 "name": "Nvme0" 00:18:15.754 }, 00:18:15.754 "method": "bdev_nvme_attach_controller" 00:18:15.754 }, 00:18:15.754 { 00:18:15.754 "method": "bdev_wait_for_examine" 00:18:15.754 } 00:18:15.754 ] 00:18:15.754 } 00:18:15.754 ] 00:18:15.754 } 00:18:16.072 [2024-11-20 07:15:40.002330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.072 [2024-11-20 07:15:40.038613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.072 [2024-11-20 07:15:40.069123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.072  [2024-11-20T07:15:40.275Z] Copying: 60/60 [kB] (average 19 MBps) 00:18:16.072 00:18:16.072 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:16.330 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 { 00:18:16.330 "subsystems": [ 00:18:16.330 { 00:18:16.330 "subsystem": "bdev", 00:18:16.330 "config": [ 00:18:16.330 { 00:18:16.330 "params": { 00:18:16.330 "trtype": "pcie", 00:18:16.330 "traddr": "0000:00:10.0", 00:18:16.330 "name": "Nvme0" 00:18:16.330 }, 00:18:16.330 "method": "bdev_nvme_attach_controller" 00:18:16.330 }, 00:18:16.330 { 00:18:16.330 "method": "bdev_wait_for_examine" 00:18:16.330 } 00:18:16.330 ] 00:18:16.330 } 00:18:16.330 ] 00:18:16.330 } 00:18:16.330 [2024-11-20 07:15:40.314027] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
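Each dd_rw iteration is the round trip just traced: write the generated dump file through the bdev, read the same region back into a second file, diff the two, then re-zero the device. A hedged sketch with this pass's parameters (paths shortened; conf.json as sketched above):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $DD --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json conf.json             # write 15 blocks
    $DD --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json conf.json  # read them back
    diff -q dd.dump0 dd.dump1 && echo 'round trip intact'
    $DD --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json      # clear_nvme: zero the first 1 MiB

A silent diff is the pass condition; corruption at this bs/qd combination would surface as a content mismatch. The "Copying: 1024/1024 [kB]" line that follows in the trace is the clear_nvme zero fill completing.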
00:18:16.330 [2024-11-20 07:15:40.314240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:18:16.330 [2024-11-20 07:15:40.461114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.330 [2024-11-20 07:15:40.496184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.330 [2024-11-20 07:15:40.526404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.589  [2024-11-20T07:15:40.792Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:16.589 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:16.589 07:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.155 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:18:17.155 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:17.155 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:17.155 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.155 [2024-11-20 07:15:41.110378] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:17.155 [2024-11-20 07:15:41.110439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58954 ] 00:18:17.155 { 00:18:17.155 "subsystems": [ 00:18:17.155 { 00:18:17.155 "subsystem": "bdev", 00:18:17.155 "config": [ 00:18:17.155 { 00:18:17.155 "params": { 00:18:17.155 "trtype": "pcie", 00:18:17.155 "traddr": "0000:00:10.0", 00:18:17.155 "name": "Nvme0" 00:18:17.155 }, 00:18:17.155 "method": "bdev_nvme_attach_controller" 00:18:17.155 }, 00:18:17.155 { 00:18:17.155 "method": "bdev_wait_for_examine" 00:18:17.155 } 00:18:17.155 ] 00:18:17.155 } 00:18:17.155 ] 00:18:17.155 } 00:18:17.155 [2024-11-20 07:15:41.249112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.155 [2024-11-20 07:15:41.284043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.155 [2024-11-20 07:15:41.314411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.413  [2024-11-20T07:15:41.616Z] Copying: 60/60 [kB] (average 58 MBps) 00:18:17.413 00:18:17.413 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:17.413 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:18:17.413 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:17.413 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.413 [2024-11-20 07:15:41.552328] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:17.413 [2024-11-20 07:15:41.552386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ] 00:18:17.413 { 00:18:17.413 "subsystems": [ 00:18:17.413 { 00:18:17.413 "subsystem": "bdev", 00:18:17.413 "config": [ 00:18:17.413 { 00:18:17.413 "params": { 00:18:17.413 "trtype": "pcie", 00:18:17.413 "traddr": "0000:00:10.0", 00:18:17.413 "name": "Nvme0" 00:18:17.413 }, 00:18:17.413 "method": "bdev_nvme_attach_controller" 00:18:17.413 }, 00:18:17.413 { 00:18:17.413 "method": "bdev_wait_for_examine" 00:18:17.413 } 00:18:17.413 ] 00:18:17.413 } 00:18:17.413 ] 00:18:17.414 } 00:18:17.672 [2024-11-20 07:15:41.693460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.672 [2024-11-20 07:15:41.728879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.672 [2024-11-20 07:15:41.759589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.672  [2024-11-20T07:15:42.132Z] Copying: 60/60 [kB] (average 29 MBps) 00:18:17.929 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:17.929 07:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.929 { 00:18:17.929 "subsystems": [ 00:18:17.929 { 00:18:17.929 "subsystem": "bdev", 00:18:17.929 "config": [ 00:18:17.929 { 00:18:17.929 "params": { 00:18:17.929 "trtype": "pcie", 00:18:17.929 "traddr": "0000:00:10.0", 00:18:17.929 "name": "Nvme0" 00:18:17.929 }, 00:18:17.929 "method": "bdev_nvme_attach_controller" 00:18:17.929 }, 00:18:17.929 { 00:18:17.929 "method": "bdev_wait_for_examine" 00:18:17.929 } 00:18:17.929 ] 00:18:17.929 } 00:18:17.929 ] 00:18:17.929 } 00:18:17.929 [2024-11-20 07:15:41.997941] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
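From here the trace repeats the same write/read/diff cycle across the rest of the grid (bs=8192 with count=7, then bs=16384 with count=3, each at qd=1 and qd=64). Compressed into the nesting that the basic_rw.sh@21/@22 trace lines show, with $DD and conf.json as sketched above:

    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            $DD --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json conf.json
            $DD --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json conf.json
            diff -q dd.dump0 dd.dump1 || exit 1
        done
    done

count here is the per-bs value from the trace; the harness also regenerates dd.dump0 and zeroes the bdev between iterations, which is what the interleaved gen_bytes and clear_nvme calls are doing.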
00:18:17.929 [2024-11-20 07:15:41.997998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58983 ] 00:18:17.929 [2024-11-20 07:15:42.129276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.187 [2024-11-20 07:15:42.164349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.187 [2024-11-20 07:15:42.194539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.187  [2024-11-20T07:15:42.390Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:18.187 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:18.444 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:18:18.700 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:18.700 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:18.700 07:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 [2024-11-20 07:15:42.851586] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:18.700 [2024-11-20 07:15:42.851772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:18:18.700 { 00:18:18.700 "subsystems": [ 00:18:18.700 { 00:18:18.700 "subsystem": "bdev", 00:18:18.700 "config": [ 00:18:18.700 { 00:18:18.700 "params": { 00:18:18.700 "trtype": "pcie", 00:18:18.700 "traddr": "0000:00:10.0", 00:18:18.701 "name": "Nvme0" 00:18:18.701 }, 00:18:18.701 "method": "bdev_nvme_attach_controller" 00:18:18.701 }, 00:18:18.701 { 00:18:18.701 "method": "bdev_wait_for_examine" 00:18:18.701 } 00:18:18.701 ] 00:18:18.701 } 00:18:18.701 ] 00:18:18.701 } 00:18:18.958 [2024-11-20 07:15:42.986306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.958 [2024-11-20 07:15:43.016849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.958 [2024-11-20 07:15:43.044752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.958  [2024-11-20T07:15:43.444Z] Copying: 56/56 [kB] (average 54 MBps) 00:18:19.242 00:18:19.242 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:18:19.242 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:19.242 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:19.242 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:19.242 [2024-11-20 07:15:43.262874] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:19.242 [2024-11-20 07:15:43.263014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59010 ] 00:18:19.242 { 00:18:19.242 "subsystems": [ 00:18:19.242 { 00:18:19.242 "subsystem": "bdev", 00:18:19.242 "config": [ 00:18:19.242 { 00:18:19.242 "params": { 00:18:19.242 "trtype": "pcie", 00:18:19.242 "traddr": "0000:00:10.0", 00:18:19.242 "name": "Nvme0" 00:18:19.242 }, 00:18:19.242 "method": "bdev_nvme_attach_controller" 00:18:19.242 }, 00:18:19.242 { 00:18:19.242 "method": "bdev_wait_for_examine" 00:18:19.242 } 00:18:19.242 ] 00:18:19.242 } 00:18:19.242 ] 00:18:19.242 } 00:18:19.242 [2024-11-20 07:15:43.397492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.242 [2024-11-20 07:15:43.427522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.517 [2024-11-20 07:15:43.455857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.517  [2024-11-20T07:15:43.720Z] Copying: 56/56 [kB] (average 27 MBps) 00:18:19.517 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:19.517 07:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:19.517 [2024-11-20 07:15:43.689384] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:19.517 [2024-11-20 07:15:43.689443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:18:19.517 { 00:18:19.517 "subsystems": [ 00:18:19.517 { 00:18:19.517 "subsystem": "bdev", 00:18:19.517 "config": [ 00:18:19.517 { 00:18:19.517 "params": { 00:18:19.517 "trtype": "pcie", 00:18:19.517 "traddr": "0000:00:10.0", 00:18:19.517 "name": "Nvme0" 00:18:19.517 }, 00:18:19.518 "method": "bdev_nvme_attach_controller" 00:18:19.518 }, 00:18:19.518 { 00:18:19.518 "method": "bdev_wait_for_examine" 00:18:19.518 } 00:18:19.518 ] 00:18:19.518 } 00:18:19.518 ] 00:18:19.518 } 00:18:19.775 [2024-11-20 07:15:43.825963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.775 [2024-11-20 07:15:43.856336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.775 [2024-11-20 07:15:43.884856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.034  [2024-11-20T07:15:44.237Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:20.034 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:20.034 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:20.599 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:18:20.599 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:20.599 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:20.599 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:20.599 [2024-11-20 07:15:44.594889] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:20.599 [2024-11-20 07:15:44.594953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:18:20.599 { 00:18:20.599 "subsystems": [ 00:18:20.599 { 00:18:20.599 "subsystem": "bdev", 00:18:20.599 "config": [ 00:18:20.599 { 00:18:20.599 "params": { 00:18:20.599 "trtype": "pcie", 00:18:20.599 "traddr": "0000:00:10.0", 00:18:20.599 "name": "Nvme0" 00:18:20.599 }, 00:18:20.599 "method": "bdev_nvme_attach_controller" 00:18:20.599 }, 00:18:20.599 { 00:18:20.599 "method": "bdev_wait_for_examine" 00:18:20.599 } 00:18:20.599 ] 00:18:20.599 } 00:18:20.599 ] 00:18:20.599 } 00:18:20.599 [2024-11-20 07:15:44.736622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.599 [2024-11-20 07:15:44.766915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.599 [2024-11-20 07:15:44.794885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.858  [2024-11-20T07:15:45.061Z] Copying: 56/56 [kB] (average 54 MBps) 00:18:20.858 00:18:20.858 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:18:20.858 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:20.858 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:20.858 07:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:20.858 { 00:18:20.858 "subsystems": [ 00:18:20.858 { 00:18:20.858 "subsystem": "bdev", 00:18:20.858 "config": [ 00:18:20.858 { 00:18:20.858 "params": { 00:18:20.858 "trtype": "pcie", 00:18:20.858 "traddr": "0000:00:10.0", 00:18:20.858 "name": "Nvme0" 00:18:20.858 }, 00:18:20.858 "method": "bdev_nvme_attach_controller" 00:18:20.858 }, 00:18:20.858 { 00:18:20.858 "method": "bdev_wait_for_examine" 00:18:20.858 } 00:18:20.858 ] 00:18:20.858 } 00:18:20.858 ] 00:18:20.858 } 00:18:20.858 [2024-11-20 07:15:45.015485] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:20.858 [2024-11-20 07:15:45.015538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59053 ] 00:18:21.116 [2024-11-20 07:15:45.156752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.116 [2024-11-20 07:15:45.190882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.116 [2024-11-20 07:15:45.220579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.116  [2024-11-20T07:15:45.577Z] Copying: 56/56 [kB] (average 54 MBps) 00:18:21.374 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:21.374 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:21.374 { 00:18:21.374 "subsystems": [ 00:18:21.374 { 00:18:21.374 "subsystem": "bdev", 00:18:21.374 "config": [ 00:18:21.374 { 00:18:21.374 "params": { 00:18:21.374 "trtype": "pcie", 00:18:21.374 "traddr": "0000:00:10.0", 00:18:21.374 "name": "Nvme0" 00:18:21.374 }, 00:18:21.374 "method": "bdev_nvme_attach_controller" 00:18:21.374 }, 00:18:21.374 { 00:18:21.374 "method": "bdev_wait_for_examine" 00:18:21.374 } 00:18:21.374 ] 00:18:21.374 } 00:18:21.374 ] 00:18:21.374 } 00:18:21.374 [2024-11-20 07:15:45.458839] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:21.374 [2024-11-20 07:15:45.458894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:18:21.632 [2024-11-20 07:15:45.589828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.632 [2024-11-20 07:15:45.624730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.632 [2024-11-20 07:15:45.655080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.632  [2024-11-20T07:15:46.093Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:21.890 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:21.890 07:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:22.149 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:18:22.149 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:22.149 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:22.149 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:22.149 [2024-11-20 07:15:46.256846] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:22.149 [2024-11-20 07:15:46.257019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:18:22.149 { 00:18:22.149 "subsystems": [ 00:18:22.149 { 00:18:22.149 "subsystem": "bdev", 00:18:22.149 "config": [ 00:18:22.149 { 00:18:22.149 "params": { 00:18:22.149 "trtype": "pcie", 00:18:22.149 "traddr": "0000:00:10.0", 00:18:22.149 "name": "Nvme0" 00:18:22.149 }, 00:18:22.149 "method": "bdev_nvme_attach_controller" 00:18:22.149 }, 00:18:22.149 { 00:18:22.149 "method": "bdev_wait_for_examine" 00:18:22.149 } 00:18:22.149 ] 00:18:22.149 } 00:18:22.149 ] 00:18:22.149 } 00:18:22.407 [2024-11-20 07:15:46.396660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.407 [2024-11-20 07:15:46.431843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.407 [2024-11-20 07:15:46.461771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.407  [2024-11-20T07:15:46.867Z] Copying: 48/48 [kB] (average 46 MBps) 00:18:22.664 00:18:22.664 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:18:22.664 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:22.664 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:22.664 07:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:22.664 [2024-11-20 07:15:46.694807] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:22.664 [2024-11-20 07:15:46.694866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59095 ] 00:18:22.664 { 00:18:22.664 "subsystems": [ 00:18:22.664 { 00:18:22.664 "subsystem": "bdev", 00:18:22.664 "config": [ 00:18:22.664 { 00:18:22.664 "params": { 00:18:22.664 "trtype": "pcie", 00:18:22.664 "traddr": "0000:00:10.0", 00:18:22.664 "name": "Nvme0" 00:18:22.664 }, 00:18:22.664 "method": "bdev_nvme_attach_controller" 00:18:22.664 }, 00:18:22.664 { 00:18:22.664 "method": "bdev_wait_for_examine" 00:18:22.664 } 00:18:22.664 ] 00:18:22.664 } 00:18:22.664 ] 00:18:22.664 } 00:18:22.664 [2024-11-20 07:15:46.829972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.922 [2024-11-20 07:15:46.864367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.922 [2024-11-20 07:15:46.894901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.922  [2024-11-20T07:15:47.125Z] Copying: 48/48 [kB] (average 46 MBps) 00:18:22.922 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:22.922 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:23.180 [2024-11-20 07:15:47.137306] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:23.180 [2024-11-20 07:15:47.137363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59111 ] 00:18:23.180 { 00:18:23.180 "subsystems": [ 00:18:23.180 { 00:18:23.180 "subsystem": "bdev", 00:18:23.180 "config": [ 00:18:23.180 { 00:18:23.180 "params": { 00:18:23.180 "trtype": "pcie", 00:18:23.180 "traddr": "0000:00:10.0", 00:18:23.180 "name": "Nvme0" 00:18:23.180 }, 00:18:23.180 "method": "bdev_nvme_attach_controller" 00:18:23.180 }, 00:18:23.180 { 00:18:23.180 "method": "bdev_wait_for_examine" 00:18:23.180 } 00:18:23.180 ] 00:18:23.180 } 00:18:23.180 ] 00:18:23.180 } 00:18:23.180 [2024-11-20 07:15:47.276925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.180 [2024-11-20 07:15:47.312006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.180 [2024-11-20 07:15:47.341850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:23.438  [2024-11-20T07:15:47.641Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:23.438 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:23.438 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:23.697 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:18:23.697 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:23.697 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:23.697 07:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:23.954 [2024-11-20 07:15:47.926444] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:23.954 [2024-11-20 07:15:47.926499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 00:18:23.954 { 00:18:23.954 "subsystems": [ 00:18:23.954 { 00:18:23.954 "subsystem": "bdev", 00:18:23.954 "config": [ 00:18:23.954 { 00:18:23.954 "params": { 00:18:23.954 "trtype": "pcie", 00:18:23.954 "traddr": "0000:00:10.0", 00:18:23.954 "name": "Nvme0" 00:18:23.954 }, 00:18:23.954 "method": "bdev_nvme_attach_controller" 00:18:23.954 }, 00:18:23.954 { 00:18:23.954 "method": "bdev_wait_for_examine" 00:18:23.954 } 00:18:23.954 ] 00:18:23.954 } 00:18:23.954 ] 00:18:23.954 } 00:18:23.954 [2024-11-20 07:15:48.062729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.954 [2024-11-20 07:15:48.097972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.954 [2024-11-20 07:15:48.127611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.212  [2024-11-20T07:15:48.415Z] Copying: 48/48 [kB] (average 46 MBps) 00:18:24.212 00:18:24.212 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:18:24.213 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:24.213 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:24.213 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:24.213 [2024-11-20 07:15:48.358572] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:24.213 [2024-11-20 07:15:48.358627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:18:24.213 { 00:18:24.213 "subsystems": [ 00:18:24.213 { 00:18:24.213 "subsystem": "bdev", 00:18:24.213 "config": [ 00:18:24.213 { 00:18:24.213 "params": { 00:18:24.213 "trtype": "pcie", 00:18:24.213 "traddr": "0000:00:10.0", 00:18:24.213 "name": "Nvme0" 00:18:24.213 }, 00:18:24.213 "method": "bdev_nvme_attach_controller" 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "method": "bdev_wait_for_examine" 00:18:24.213 } 00:18:24.213 ] 00:18:24.213 } 00:18:24.213 ] 00:18:24.213 } 00:18:24.471 [2024-11-20 07:15:48.496107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.471 [2024-11-20 07:15:48.530831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.471 [2024-11-20 07:15:48.561448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.471  [2024-11-20T07:15:48.932Z] Copying: 48/48 [kB] (average 46 MBps) 00:18:24.729 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:24.729 07:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:24.729 [2024-11-20 07:15:48.798299] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:24.729 [2024-11-20 07:15:48.798351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:18:24.729 { 00:18:24.729 "subsystems": [ 00:18:24.729 { 00:18:24.729 "subsystem": "bdev", 00:18:24.729 "config": [ 00:18:24.729 { 00:18:24.729 "params": { 00:18:24.729 "trtype": "pcie", 00:18:24.729 "traddr": "0000:00:10.0", 00:18:24.729 "name": "Nvme0" 00:18:24.729 }, 00:18:24.729 "method": "bdev_nvme_attach_controller" 00:18:24.729 }, 00:18:24.729 { 00:18:24.729 "method": "bdev_wait_for_examine" 00:18:24.729 } 00:18:24.729 ] 00:18:24.729 } 00:18:24.729 ] 00:18:24.729 } 00:18:24.987 [2024-11-20 07:15:48.933983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.987 [2024-11-20 07:15:48.969176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.987 [2024-11-20 07:15:49.000125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.987  [2024-11-20T07:15:49.448Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:25.245 00:18:25.245 00:18:25.245 real 0m10.271s 00:18:25.245 user 0m7.226s 00:18:25.245 sys 0m3.218s 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:25.245 ************************************ 00:18:25.245 END TEST dd_rw 00:18:25.245 ************************************ 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:25.245 ************************************ 00:18:25.245 START TEST dd_rw_offset 00:18:25.245 ************************************ 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=38g7a2bpnyyrvwbn7ll7ck4qdo4k95d6nqawlwyci08041au5wjmdw527clkufsjc1hrkq5tvkl7r9kel8pcdiff2p4r0yk9gzfrfo2o5bcjwxlqek8si3042od66q13aijlnerlpwo40hls1e8vfm3mnry3oz7zk0ts5d9qbgfqqwj4b8nuij5b8pflmzvzwwp362c8dnlqgt369qhdiw8ileknfksle75s24hz0eaovfl53p4zsj4puek3wzjkpuwph8vhx9p621yo84t9rau5pj5u8twit3tc71fznd60wc8byy3qgvyim5sj33gr5czx0h0aj0u3t6il4h9txjwn6bczhmg9xhysgpwzwzda2kg75n3sai3isd0ghbnvsobkcft6bm5ad640klk7phdthyza7xnp6gibk2sisbupk2yecx3nt6784bbne2lhy75y5qok7c0k82t4s8vejoplfgsnkxpj0twltjdtw60isucw6s3j7b8ol1zyzsyu3veu4zu46vwav9y83oybhurc31n513xt75n9gkfohbwcxdu608ezc1fl6z6lq7l18narjsnw0kk1ij4vuayp9l2wkr9cixbqjt7p09o0qj7tgg6e1936uzfvcppwd946m6v7aar8tl6omntg64mz35m5ek7z3klg8uvlgbd461cs4bbpabdjn0ua1a24shd2pgokjkjb94kwex2iymo68vlj4m9iawz5b22lin9lx2f6zoxfxz2rpaa67e0b6x4rrdd8zk7dokitpptvt2qmcpkr2ee7pcu7zq06p6wojwd8irj2w6gwe9q0hv28ln81p5t8rkfd45lo9utq7sjdkmmu89t21m21fyw7eldqp290yjr4k9nuczm7qmsg78j4ve6clstfk0ldveppljyccymtpyw8rdbz7cx4n50hfd8hkf9ip039hc30jc1q4i8tjjdd4nkm0zl7lu6f7b5wq4e19cfrazvzrhumz3fywrij7u44sqob8qybktkoi52lwhny5walu6bvoubd72lj18i6phkod4csw1ds99c3654gzns4oywvawy5xa524n1ukl7aplvc5iwj96jtminy5djp82b9m2d6n6dr322t0tbs0gq0ly17timhmg4bfcsdp7bf97fqkdg9n1ax42aelyi9uokn7rm8us3cgw5bzh8q8f3rc9zgj747txevdsadqtfc6z9adztpwsatzvmzgrolyr2l46elgw0g4ow1ki2q0dvw7t494zk1hl9w99bc3vveqy31uwguwwco8xaeb1xqg3xkznbc33551uf5xvje286vsb8pcf3slndm3t1zg6akavwc7zkljyupruw3f0uiypeuptktvi086ulmca07h4ez3a8lhs8127927i8xv41nzf6rwpsgo12y7dm59n628ylxe3fhh5j9fei7hdpgmtjuaqhz5qa7cg4ekfx19670k6i6zkir2k5stole8lbmmt5ehp739sci8wpuqrab9jxfwzdh3lcphi2m7erzmm0p9kckwynh1uet56zmqjc1yogyp51sp8vd6bxallt50i54eg58060iuuyriohdey7w4cbidqlxjsfxz98culyr6oz6ctfqn5w6g76hwosgrcah0u1oxc0cgot1eku3u6uaz00br9y3q5flhuwjnxtk0bmgtmd020pj2qg2y3a0q0x9py8h2itjluxc0fnxbug3nhc6n2069zhphl8s4th0xmvztlgi36vk27brvkp023of566pememr5goajomjz6u9ly0xdv3g2a8pp6z4twtlskg98avhnu1cjf5tswj98bgp042c4jomjwqe4vsq51asd7u7zvzt8on1s4c6fnxf7vfzi6lom3yt66m300zmnrf8mihr32n9lmstgbqnhpqxklbpjyqq22bt694zti3pu4pqp2xjud86seajervpxr0lhnjt4nmlfjhup8t3x0o8o40o60mkx3k2g5jj4mlmca8etqrpl92q4qurr6ihoob7z8b3bpf0sb27idp5oq4jryhv1z8s7236sh2twsi9ekcob6avfihsmjhvut7p5k96ck574xrcpup7ht26dyyghsdx1zhf0czl85xccmz1b1wva9teeqmveorejhtttnr3eit48eixahc3gz76cvwzk76isea617y85bs3no8kyu8hpmq1g2vu40jjwd48xtz60qeol0v7vqwjf6omc69b0ayea8r77hlp40rdwvgmy3gk8mwomjspt0tsfy059jrg2g8hbxsi4o531bdu6yvolnb4il7088ybk9apemrstwf27anjnoi3s7rmd67wmv1ccm7s9yue8nqbfm0lx8oosilnw186il398b155uuwkju6tg175941z6i2szn5upoit8r5gzrlf6f7g34c33zjk5vj0ty0by4c9olvgawrchhv5fttv6wqv3gzmb2urr2jgt9gumscnon9nfxwo5mrnhtd56ilq8wj94euag7f6vt4645cci8h5ye3u2q7xn0mc6qgs6bkmkqg6bpuqn3ny4nrry2n7lmf797pnk9fq59a16rih9b59rlml12gwg9yl9uqnopmqmufeddsiq7be8hekws7ukkj2q76ju0ds8feyg5v4m1jhnllnipuk88bzjkkc8ka4t8lv05oby0jpscmsksqar3ag1phjhszecgo7kuwjqozolxgim6rxev6t6vhp9j3lmm2qhnj9pw7lm4n7653sy25wm94so4t4jxi0jtubv8lbtaduxs7aal7a10fh2m1lwhsttjn0303tfpdtui9ej6w70wywqgu0ze0rhc9t4ts9hh2tite95cbdchgkpqjiy341lvq47cbi17qtb3dphfybd4kv3em458cscl4plx6egp1zccmr5xayhh0vtdsbgaj7i3q90mk0qcek8a855inriz3h3873eky6dibkjro4dsf60gcum6ax74kmwqmyadd12s3ntldaq2i9hesy5mfttdivziuh2t2l8ph6bqo1et31730mcwtsgifyjg01n0zdrllxzlgdp8jbcc2kx9g16qkbr907ypvdujm5hj1dnqtwnfsk4cms55vnufvqrsev6uu6jrwy6kp8qwvcky59o8eamhbbksqz7zcjpyboj4hy67f2aoot8z2covuy0k3o4fuplxrmo2cicwyhyuxsrkd655vifa3nltfstvha0p98bo0tdlz499d18kj0dpx73jor5tm9r18sjymzthv40v5ztj6dt5ghv8096hwysfzi1zr4x8j1sjbkz56zcpfe0mrcjydlshujsw9suxtx63asqsfdgygvlcvi6o6q8kglxrpbpaagjzq311tqzdalrjxm2lsyyj3qg85nxcx8ei9whr5vexl3ye8n1jj36vx9nbs9339m7d547cb9avdacf70njcqptx4f7xvcp9n9eiquozogfarzbmv147jezotjqdg2q60sn0u4rkmln39y6dw1zvuec5ybmdwkd68wtxo9dpnfi59
hmg06x10gfvoyap1d0b1tzvk16eepor0j1l9st5i8acmzn0arfp390pb73kpk9udjp4kfxta2wq5red98yyakvpkidbfqd260yeksda093dmjlq2kh5iczjyuzji79wjbgt3btf7y70kii4s8m0a08ld8llp3haurdgnt0ko0sj953zomckh878foquwl76wuarzhggfdtlpltbv6v9njuty1h9etgkoyzgti42y9b5zritp2vi0eenwgiyjwx8gsedfdx0zz9g92d3rzd5lwduo3mnmqfyjq14htdbypjq5wdopyks4oyq3dxsoi8sz4bm93m5k8zs9yywq9jjg3x3d9397uj9zwntgfjpsj2z8utxkej6bld6szf3tpqplodzued4jd5nsxwsyzi8ra9kogsrg831ltjoeaeew5cps0poj7s95yqk7vlc2qetsktbxb74978cuqaet37sowcaymbfi7o3gs7sm39597qukcmdvyzjmc5jl0dhukjg70kqu5ge5wlda40oave3sku4rpskmo00buf 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:18:25.245 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:18:25.245 [2024-11-20 07:15:49.299173] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:25.245 [2024-11-20 07:15:49.299246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:18:25.245 { 00:18:25.245 "subsystems": [ 00:18:25.245 { 00:18:25.245 "subsystem": "bdev", 00:18:25.245 "config": [ 00:18:25.245 { 00:18:25.245 "params": { 00:18:25.245 "trtype": "pcie", 00:18:25.245 "traddr": "0000:00:10.0", 00:18:25.245 "name": "Nvme0" 00:18:25.245 }, 00:18:25.245 "method": "bdev_nvme_attach_controller" 00:18:25.245 }, 00:18:25.245 { 00:18:25.245 "method": "bdev_wait_for_examine" 00:18:25.245 } 00:18:25.245 ] 00:18:25.245 } 00:18:25.245 ] 00:18:25.245 } 00:18:25.245 [2024-11-20 07:15:49.430088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.571 [2024-11-20 07:15:49.465079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.571 [2024-11-20 07:15:49.494906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.571  [2024-11-20T07:15:49.774Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:18:25.571 00:18:25.571 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:18:25.571 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:18:25.571 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:18:25.571 07:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:18:25.571 [2024-11-20 07:15:49.729742] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
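dd_rw_offset, started above, checks offset addressing rather than bulk throughput: a single native block of generated text is written one block into the bdev (--seek=1 on the output), then read back from the same offset (--skip=1 on the input) and compared against the original. With the helpers sketched earlier:

    $DD --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json            # write at an offset of 1 block
    $DD --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json conf.json  # read the same block back
    # dd.dump1 should now hold the 4096 generated characters shown above

The "Copying: 4096/4096 [B]" progress lines in the trace are these two single-block copies; the read -rn4096 data_check comparison that follows verifies the payload byte for byte.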
00:18:25.571 [2024-11-20 07:15:49.729822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ] 00:18:25.571 { 00:18:25.571 "subsystems": [ 00:18:25.571 { 00:18:25.571 "subsystem": "bdev", 00:18:25.571 "config": [ 00:18:25.571 { 00:18:25.572 "params": { 00:18:25.572 "trtype": "pcie", 00:18:25.572 "traddr": "0000:00:10.0", 00:18:25.572 "name": "Nvme0" 00:18:25.572 }, 00:18:25.572 "method": "bdev_nvme_attach_controller" 00:18:25.572 }, 00:18:25.572 { 00:18:25.572 "method": "bdev_wait_for_examine" 00:18:25.572 } 00:18:25.572 ] 00:18:25.572 } 00:18:25.572 ] 00:18:25.572 } 00:18:25.830 [2024-11-20 07:15:49.867353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.830 [2024-11-20 07:15:49.902450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.830 [2024-11-20 07:15:49.932692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.830  [2024-11-20T07:15:50.292Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:18:26.089 00:18:26.089 07:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:18:26.089 ************************************ 00:18:26.089 END TEST dd_rw_offset 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 38g7a2bpnyyrvwbn7ll7ck4qdo4k95d6nqawlwyci08041au5wjmdw527clkufsjc1hrkq5tvkl7r9kel8pcdiff2p4r0yk9gzfrfo2o5bcjwxlqek8si3042od66q13aijlnerlpwo40hls1e8vfm3mnry3oz7zk0ts5d9qbgfqqwj4b8nuij5b8pflmzvzwwp362c8dnlqgt369qhdiw8ileknfksle75s24hz0eaovfl53p4zsj4puek3wzjkpuwph8vhx9p621yo84t9rau5pj5u8twit3tc71fznd60wc8byy3qgvyim5sj33gr5czx0h0aj0u3t6il4h9txjwn6bczhmg9xhysgpwzwzda2kg75n3sai3isd0ghbnvsobkcft6bm5ad640klk7phdthyza7xnp6gibk2sisbupk2yecx3nt6784bbne2lhy75y5qok7c0k82t4s8vejoplfgsnkxpj0twltjdtw60isucw6s3j7b8ol1zyzsyu3veu4zu46vwav9y83oybhurc31n513xt75n9gkfohbwcxdu608ezc1fl6z6lq7l18narjsnw0kk1ij4vuayp9l2wkr9cixbqjt7p09o0qj7tgg6e1936uzfvcppwd946m6v7aar8tl6omntg64mz35m5ek7z3klg8uvlgbd461cs4bbpabdjn0ua1a24shd2pgokjkjb94kwex2iymo68vlj4m9iawz5b22lin9lx2f6zoxfxz2rpaa67e0b6x4rrdd8zk7dokitpptvt2qmcpkr2ee7pcu7zq06p6wojwd8irj2w6gwe9q0hv28ln81p5t8rkfd45lo9utq7sjdkmmu89t21m21fyw7eldqp290yjr4k9nuczm7qmsg78j4ve6clstfk0ldveppljyccymtpyw8rdbz7cx4n50hfd8hkf9ip039hc30jc1q4i8tjjdd4nkm0zl7lu6f7b5wq4e19cfrazvzrhumz3fywrij7u44sqob8qybktkoi52lwhny5walu6bvoubd72lj18i6phkod4csw1ds99c3654gzns4oywvawy5xa524n1ukl7aplvc5iwj96jtminy5djp82b9m2d6n6dr322t0tbs0gq0ly17timhmg4bfcsdp7bf97fqkdg9n1ax42aelyi9uokn7rm8us3cgw5bzh8q8f3rc9zgj747txevdsadqtfc6z9adztpwsatzvmzgrolyr2l46elgw0g4ow1ki2q0dvw7t494zk1hl9w99bc3vveqy31uwguwwco8xaeb1xqg3xkznbc33551uf5xvje286vsb8pcf3slndm3t1zg6akavwc7zkljyupruw3f0uiypeuptktvi086ulmca07h4ez3a8lhs8127927i8xv41nzf6rwpsgo12y7dm59n628ylxe3fhh5j9fei7hdpgmtjuaqhz5qa7cg4ekfx19670k6i6zkir2k5stole8lbmmt5ehp739sci8wpuqrab9jxfwzdh3lcphi2m7erzmm0p9kckwynh1uet56zmqjc1yogyp51sp8vd6bxallt50i54eg58060iuuyriohdey7w4cbidqlxjsfxz98culyr6oz6ctfqn5w6g76hwosgrcah0u1oxc0cgot1eku3u6uaz00br9y3q5flhuwjnxtk0bmgtmd020pj2qg2y3a0q0x9py8h2itjluxc0fnxbug3nhc6n2069zhphl8s4th0xmvztlgi36vk27brvkp023of566pememr5goajomjz6u9ly0xdv3g2a8pp6z4twtlskg98avhnu1cjf5tswj98bgp042c4jomjwqe4vsq51asd7u7zvzt8on1s4c6fnxf7vfzi6lom3yt66m300zmnrf8mihr32n9lmstgbqnhpqxklbpjyqq22bt694zti3pu4pqp2xjud86seajervpxr0lhnjt4nmlfjhup8t3x0o8o40o60mkx3k2g5jj4mlmca8etqrpl92q4qurr6ihoob7z8b3bpf0sb2
7idp5oq4jryhv1z8s7236sh2twsi9ekcob6avfihsmjhvut7p5k96ck574xrcpup7ht26dyyghsdx1zhf0czl85xccmz1b1wva9teeqmveorejhtttnr3eit48eixahc3gz76cvwzk76isea617y85bs3no8kyu8hpmq1g2vu40jjwd48xtz60qeol0v7vqwjf6omc69b0ayea8r77hlp40rdwvgmy3gk8mwomjspt0tsfy059jrg2g8hbxsi4o531bdu6yvolnb4il7088ybk9apemrstwf27anjnoi3s7rmd67wmv1ccm7s9yue8nqbfm0lx8oosilnw186il398b155uuwkju6tg175941z6i2szn5upoit8r5gzrlf6f7g34c33zjk5vj0ty0by4c9olvgawrchhv5fttv6wqv3gzmb2urr2jgt9gumscnon9nfxwo5mrnhtd56ilq8wj94euag7f6vt4645cci8h5ye3u2q7xn0mc6qgs6bkmkqg6bpuqn3ny4nrry2n7lmf797pnk9fq59a16rih9b59rlml12gwg9yl9uqnopmqmufeddsiq7be8hekws7ukkj2q76ju0ds8feyg5v4m1jhnllnipuk88bzjkkc8ka4t8lv05oby0jpscmsksqar3ag1phjhszecgo7kuwjqozolxgim6rxev6t6vhp9j3lmm2qhnj9pw7lm4n7653sy25wm94so4t4jxi0jtubv8lbtaduxs7aal7a10fh2m1lwhsttjn0303tfpdtui9ej6w70wywqgu0ze0rhc9t4ts9hh2tite95cbdchgkpqjiy341lvq47cbi17qtb3dphfybd4kv3em458cscl4plx6egp1zccmr5xayhh0vtdsbgaj7i3q90mk0qcek8a855inriz3h3873eky6dibkjro4dsf60gcum6ax74kmwqmyadd12s3ntldaq2i9hesy5mfttdivziuh2t2l8ph6bqo1et31730mcwtsgifyjg01n0zdrllxzlgdp8jbcc2kx9g16qkbr907ypvdujm5hj1dnqtwnfsk4cms55vnufvqrsev6uu6jrwy6kp8qwvcky59o8eamhbbksqz7zcjpyboj4hy67f2aoot8z2covuy0k3o4fuplxrmo2cicwyhyuxsrkd655vifa3nltfstvha0p98bo0tdlz499d18kj0dpx73jor5tm9r18sjymzthv40v5ztj6dt5ghv8096hwysfzi1zr4x8j1sjbkz56zcpfe0mrcjydlshujsw9suxtx63asqsfdgygvlcvi6o6q8kglxrpbpaagjzq311tqzdalrjxm2lsyyj3qg85nxcx8ei9whr5vexl3ye8n1jj36vx9nbs9339m7d547cb9avdacf70njcqptx4f7xvcp9n9eiquozogfarzbmv147jezotjqdg2q60sn0u4rkmln39y6dw1zvuec5ybmdwkd68wtxo9dpnfi59hmg06x10gfvoyap1d0b1tzvk16eepor0j1l9st5i8acmzn0arfp390pb73kpk9udjp4kfxta2wq5red98yyakvpkidbfqd260yeksda093dmjlq2kh5iczjyuzji79wjbgt3btf7y70kii4s8m0a08ld8llp3haurdgnt0ko0sj953zomckh878foquwl76wuarzhggfdtlpltbv6v9njuty1h9etgkoyzgti42y9b5zritp2vi0eenwgiyjwx8gsedfdx0zz9g92d3rzd5lwduo3mnmqfyjq14htdbypjq5wdopyks4oyq3dxsoi8sz4bm93m5k8zs9yywq9jjg3x3d9397uj9zwntgfjpsj2z8utxkej6bld6szf3tpqplodzued4jd5nsxwsyzi8ra9kogsrg831ltjoeaeew5cps0poj7s95yqk7vlc2qetsktbxb74978cuqaet37sowcaymbfi7o3gs7sm39597qukcmdvyzjmc5jl0dhukjg70kqu5ge5wlda40oave3sku4rpskmo00buf == 
\3\8\g\7\a\2\b\p\n\y\y\r\v\w\b\n\7\l\l\7\c\k\4\q\d\o\4\k\9\5\d\6\n\q\a\w\l\w\y\c\i\0\8\0\4\1\a\u\5\w\j\m\d\w\5\2\7\c\l\k\u\f\s\j\c\1\h\r\k\q\5\t\v\k\l\7\r\9\k\e\l\8\p\c\d\i\f\f\2\p\4\r\0\y\k\9\g\z\f\r\f\o\2\o\5\b\c\j\w\x\l\q\e\k\8\s\i\3\0\4\2\o\d\6\6\q\1\3\a\i\j\l\n\e\r\l\p\w\o\4\0\h\l\s\1\e\8\v\f\m\3\m\n\r\y\3\o\z\7\z\k\0\t\s\5\d\9\q\b\g\f\q\q\w\j\4\b\8\n\u\i\j\5\b\8\p\f\l\m\z\v\z\w\w\p\3\6\2\c\8\d\n\l\q\g\t\3\6\9\q\h\d\i\w\8\i\l\e\k\n\f\k\s\l\e\7\5\s\2\4\h\z\0\e\a\o\v\f\l\5\3\p\4\z\s\j\4\p\u\e\k\3\w\z\j\k\p\u\w\p\h\8\v\h\x\9\p\6\2\1\y\o\8\4\t\9\r\a\u\5\p\j\5\u\8\t\w\i\t\3\t\c\7\1\f\z\n\d\6\0\w\c\8\b\y\y\3\q\g\v\y\i\m\5\s\j\3\3\g\r\5\c\z\x\0\h\0\a\j\0\u\3\t\6\i\l\4\h\9\t\x\j\w\n\6\b\c\z\h\m\g\9\x\h\y\s\g\p\w\z\w\z\d\a\2\k\g\7\5\n\3\s\a\i\3\i\s\d\0\g\h\b\n\v\s\o\b\k\c\f\t\6\b\m\5\a\d\6\4\0\k\l\k\7\p\h\d\t\h\y\z\a\7\x\n\p\6\g\i\b\k\2\s\i\s\b\u\p\k\2\y\e\c\x\3\n\t\6\7\8\4\b\b\n\e\2\l\h\y\7\5\y\5\q\o\k\7\c\0\k\8\2\t\4\s\8\v\e\j\o\p\l\f\g\s\n\k\x\p\j\0\t\w\l\t\j\d\t\w\6\0\i\s\u\c\w\6\s\3\j\7\b\8\o\l\1\z\y\z\s\y\u\3\v\e\u\4\z\u\4\6\v\w\a\v\9\y\8\3\o\y\b\h\u\r\c\3\1\n\5\1\3\x\t\7\5\n\9\g\k\f\o\h\b\w\c\x\d\u\6\0\8\e\z\c\1\f\l\6\z\6\l\q\7\l\1\8\n\a\r\j\s\n\w\0\k\k\1\i\j\4\v\u\a\y\p\9\l\2\w\k\r\9\c\i\x\b\q\j\t\7\p\0\9\o\0\q\j\7\t\g\g\6\e\1\9\3\6\u\z\f\v\c\p\p\w\d\9\4\6\m\6\v\7\a\a\r\8\t\l\6\o\m\n\t\g\6\4\m\z\3\5\m\5\e\k\7\z\3\k\l\g\8\u\v\l\g\b\d\4\6\1\c\s\4\b\b\p\a\b\d\j\n\0\u\a\1\a\2\4\s\h\d\2\p\g\o\k\j\k\j\b\9\4\k\w\e\x\2\i\y\m\o\6\8\v\l\j\4\m\9\i\a\w\z\5\b\2\2\l\i\n\9\l\x\2\f\6\z\o\x\f\x\z\2\r\p\a\a\6\7\e\0\b\6\x\4\r\r\d\d\8\z\k\7\d\o\k\i\t\p\p\t\v\t\2\q\m\c\p\k\r\2\e\e\7\p\c\u\7\z\q\0\6\p\6\w\o\j\w\d\8\i\r\j\2\w\6\g\w\e\9\q\0\h\v\2\8\l\n\8\1\p\5\t\8\r\k\f\d\4\5\l\o\9\u\t\q\7\s\j\d\k\m\m\u\8\9\t\2\1\m\2\1\f\y\w\7\e\l\d\q\p\2\9\0\y\j\r\4\k\9\n\u\c\z\m\7\q\m\s\g\7\8\j\4\v\e\6\c\l\s\t\f\k\0\l\d\v\e\p\p\l\j\y\c\c\y\m\t\p\y\w\8\r\d\b\z\7\c\x\4\n\5\0\h\f\d\8\h\k\f\9\i\p\0\3\9\h\c\3\0\j\c\1\q\4\i\8\t\j\j\d\d\4\n\k\m\0\z\l\7\l\u\6\f\7\b\5\w\q\4\e\1\9\c\f\r\a\z\v\z\r\h\u\m\z\3\f\y\w\r\i\j\7\u\4\4\s\q\o\b\8\q\y\b\k\t\k\o\i\5\2\l\w\h\n\y\5\w\a\l\u\6\b\v\o\u\b\d\7\2\l\j\1\8\i\6\p\h\k\o\d\4\c\s\w\1\d\s\9\9\c\3\6\5\4\g\z\n\s\4\o\y\w\v\a\w\y\5\x\a\5\2\4\n\1\u\k\l\7\a\p\l\v\c\5\i\w\j\9\6\j\t\m\i\n\y\5\d\j\p\8\2\b\9\m\2\d\6\n\6\d\r\3\2\2\t\0\t\b\s\0\g\q\0\l\y\1\7\t\i\m\h\m\g\4\b\f\c\s\d\p\7\b\f\9\7\f\q\k\d\g\9\n\1\a\x\4\2\a\e\l\y\i\9\u\o\k\n\7\r\m\8\u\s\3\c\g\w\5\b\z\h\8\q\8\f\3\r\c\9\z\g\j\7\4\7\t\x\e\v\d\s\a\d\q\t\f\c\6\z\9\a\d\z\t\p\w\s\a\t\z\v\m\z\g\r\o\l\y\r\2\l\4\6\e\l\g\w\0\g\4\o\w\1\k\i\2\q\0\d\v\w\7\t\4\9\4\z\k\1\h\l\9\w\9\9\b\c\3\v\v\e\q\y\3\1\u\w\g\u\w\w\c\o\8\x\a\e\b\1\x\q\g\3\x\k\z\n\b\c\3\3\5\5\1\u\f\5\x\v\j\e\2\8\6\v\s\b\8\p\c\f\3\s\l\n\d\m\3\t\1\z\g\6\a\k\a\v\w\c\7\z\k\l\j\y\u\p\r\u\w\3\f\0\u\i\y\p\e\u\p\t\k\t\v\i\0\8\6\u\l\m\c\a\0\7\h\4\e\z\3\a\8\l\h\s\8\1\2\7\9\2\7\i\8\x\v\4\1\n\z\f\6\r\w\p\s\g\o\1\2\y\7\d\m\5\9\n\6\2\8\y\l\x\e\3\f\h\h\5\j\9\f\e\i\7\h\d\p\g\m\t\j\u\a\q\h\z\5\q\a\7\c\g\4\e\k\f\x\1\9\6\7\0\k\6\i\6\z\k\i\r\2\k\5\s\t\o\l\e\8\l\b\m\m\t\5\e\h\p\7\3\9\s\c\i\8\w\p\u\q\r\a\b\9\j\x\f\w\z\d\h\3\l\c\p\h\i\2\m\7\e\r\z\m\m\0\p\9\k\c\k\w\y\n\h\1\u\e\t\5\6\z\m\q\j\c\1\y\o\g\y\p\5\1\s\p\8\v\d\6\b\x\a\l\l\t\5\0\i\5\4\e\g\5\8\0\6\0\i\u\u\y\r\i\o\h\d\e\y\7\w\4\c\b\i\d\q\l\x\j\s\f\x\z\9\8\c\u\l\y\r\6\o\z\6\c\t\f\q\n\5\w\6\g\7\6\h\w\o\s\g\r\c\a\h\0\u\1\o\x\c\0\c\g\o\t\1\e\k\u\3\u\6\u\a\z\0\0\b\r\9\y\3\q\5\f\l\h\u\w\j\n\x\t\k\0\b\m\g\t\m\d\0\2\0\p\j\2\q\g\2\y\3\a\0\q\0\x\9\p\y\8\h\2\i\t\j\l\u\x\c\0\f\n\x\b\u\g\3\n\h\c\6\n\2\0\6\9\z\h\p\h\l\8\s\4\t\h\0\x\m\v\z\t\l\g\i\3\6\v\k\2\7\b\r\v\k\p\0\2\3\o\f\5\
6\6\p\e\m\e\m\r\5\g\o\a\j\o\m\j\z\6\u\9\l\y\0\x\d\v\3\g\2\a\8\p\p\6\z\4\t\w\t\l\s\k\g\9\8\a\v\h\n\u\1\c\j\f\5\t\s\w\j\9\8\b\g\p\0\4\2\c\4\j\o\m\j\w\q\e\4\v\s\q\5\1\a\s\d\7\u\7\z\v\z\t\8\o\n\1\s\4\c\6\f\n\x\f\7\v\f\z\i\6\l\o\m\3\y\t\6\6\m\3\0\0\z\m\n\r\f\8\m\i\h\r\3\2\n\9\l\m\s\t\g\b\q\n\h\p\q\x\k\l\b\p\j\y\q\q\2\2\b\t\6\9\4\z\t\i\3\p\u\4\p\q\p\2\x\j\u\d\8\6\s\e\a\j\e\r\v\p\x\r\0\l\h\n\j\t\4\n\m\l\f\j\h\u\p\8\t\3\x\0\o\8\o\4\0\o\6\0\m\k\x\3\k\2\g\5\j\j\4\m\l\m\c\a\8\e\t\q\r\p\l\9\2\q\4\q\u\r\r\6\i\h\o\o\b\7\z\8\b\3\b\p\f\0\s\b\2\7\i\d\p\5\o\q\4\j\r\y\h\v\1\z\8\s\7\2\3\6\s\h\2\t\w\s\i\9\e\k\c\o\b\6\a\v\f\i\h\s\m\j\h\v\u\t\7\p\5\k\9\6\c\k\5\7\4\x\r\c\p\u\p\7\h\t\2\6\d\y\y\g\h\s\d\x\1\z\h\f\0\c\z\l\8\5\x\c\c\m\z\1\b\1\w\v\a\9\t\e\e\q\m\v\e\o\r\e\j\h\t\t\t\n\r\3\e\i\t\4\8\e\i\x\a\h\c\3\g\z\7\6\c\v\w\z\k\7\6\i\s\e\a\6\1\7\y\8\5\b\s\3\n\o\8\k\y\u\8\h\p\m\q\1\g\2\v\u\4\0\j\j\w\d\4\8\x\t\z\6\0\q\e\o\l\0\v\7\v\q\w\j\f\6\o\m\c\6\9\b\0\a\y\e\a\8\r\7\7\h\l\p\4\0\r\d\w\v\g\m\y\3\g\k\8\m\w\o\m\j\s\p\t\0\t\s\f\y\0\5\9\j\r\g\2\g\8\h\b\x\s\i\4\o\5\3\1\b\d\u\6\y\v\o\l\n\b\4\i\l\7\0\8\8\y\b\k\9\a\p\e\m\r\s\t\w\f\2\7\a\n\j\n\o\i\3\s\7\r\m\d\6\7\w\m\v\1\c\c\m\7\s\9\y\u\e\8\n\q\b\f\m\0\l\x\8\o\o\s\i\l\n\w\1\8\6\i\l\3\9\8\b\1\5\5\u\u\w\k\j\u\6\t\g\1\7\5\9\4\1\z\6\i\2\s\z\n\5\u\p\o\i\t\8\r\5\g\z\r\l\f\6\f\7\g\3\4\c\3\3\z\j\k\5\v\j\0\t\y\0\b\y\4\c\9\o\l\v\g\a\w\r\c\h\h\v\5\f\t\t\v\6\w\q\v\3\g\z\m\b\2\u\r\r\2\j\g\t\9\g\u\m\s\c\n\o\n\9\n\f\x\w\o\5\m\r\n\h\t\d\5\6\i\l\q\8\w\j\9\4\e\u\a\g\7\f\6\v\t\4\6\4\5\c\c\i\8\h\5\y\e\3\u\2\q\7\x\n\0\m\c\6\q\g\s\6\b\k\m\k\q\g\6\b\p\u\q\n\3\n\y\4\n\r\r\y\2\n\7\l\m\f\7\9\7\p\n\k\9\f\q\5\9\a\1\6\r\i\h\9\b\5\9\r\l\m\l\1\2\g\w\g\9\y\l\9\u\q\n\o\p\m\q\m\u\f\e\d\d\s\i\q\7\b\e\8\h\e\k\w\s\7\u\k\k\j\2\q\7\6\j\u\0\d\s\8\f\e\y\g\5\v\4\m\1\j\h\n\l\l\n\i\p\u\k\8\8\b\z\j\k\k\c\8\k\a\4\t\8\l\v\0\5\o\b\y\0\j\p\s\c\m\s\k\s\q\a\r\3\a\g\1\p\h\j\h\s\z\e\c\g\o\7\k\u\w\j\q\o\z\o\l\x\g\i\m\6\r\x\e\v\6\t\6\v\h\p\9\j\3\l\m\m\2\q\h\n\j\9\p\w\7\l\m\4\n\7\6\5\3\s\y\2\5\w\m\9\4\s\o\4\t\4\j\x\i\0\j\t\u\b\v\8\l\b\t\a\d\u\x\s\7\a\a\l\7\a\1\0\f\h\2\m\1\l\w\h\s\t\t\j\n\0\3\0\3\t\f\p\d\t\u\i\9\e\j\6\w\7\0\w\y\w\q\g\u\0\z\e\0\r\h\c\9\t\4\t\s\9\h\h\2\t\i\t\e\9\5\c\b\d\c\h\g\k\p\q\j\i\y\3\4\1\l\v\q\4\7\c\b\i\1\7\q\t\b\3\d\p\h\f\y\b\d\4\k\v\3\e\m\4\5\8\c\s\c\l\4\p\l\x\6\e\g\p\1\z\c\c\m\r\5\x\a\y\h\h\0\v\t\d\s\b\g\a\j\7\i\3\q\9\0\m\k\0\q\c\e\k\8\a\8\5\5\i\n\r\i\z\3\h\3\8\7\3\e\k\y\6\d\i\b\k\j\r\o\4\d\s\f\6\0\g\c\u\m\6\a\x\7\4\k\m\w\q\m\y\a\d\d\1\2\s\3\n\t\l\d\a\q\2\i\9\h\e\s\y\5\m\f\t\t\d\i\v\z\i\u\h\2\t\2\l\8\p\h\6\b\q\o\1\e\t\3\1\7\3\0\m\c\w\t\s\g\i\f\y\j\g\0\1\n\0\z\d\r\l\l\x\z\l\g\d\p\8\j\b\c\c\2\k\x\9\g\1\6\q\k\b\r\9\0\7\y\p\v\d\u\j\m\5\h\j\1\d\n\q\t\w\n\f\s\k\4\c\m\s\5\5\v\n\u\f\v\q\r\s\e\v\6\u\u\6\j\r\w\y\6\k\p\8\q\w\v\c\k\y\5\9\o\8\e\a\m\h\b\b\k\s\q\z\7\z\c\j\p\y\b\o\j\4\h\y\6\7\f\2\a\o\o\t\8\z\2\c\o\v\u\y\0\k\3\o\4\f\u\p\l\x\r\m\o\2\c\i\c\w\y\h\y\u\x\s\r\k\d\6\5\5\v\i\f\a\3\n\l\t\f\s\t\v\h\a\0\p\9\8\b\o\0\t\d\l\z\4\9\9\d\1\8\k\j\0\d\p\x\7\3\j\o\r\5\t\m\9\r\1\8\s\j\y\m\z\t\h\v\4\0\v\5\z\t\j\6\d\t\5\g\h\v\8\0\9\6\h\w\y\s\f\z\i\1\z\r\4\x\8\j\1\s\j\b\k\z\5\6\z\c\p\f\e\0\m\r\c\j\y\d\l\s\h\u\j\s\w\9\s\u\x\t\x\6\3\a\s\q\s\f\d\g\y\g\v\l\c\v\i\6\o\6\q\8\k\g\l\x\r\p\b\p\a\a\g\j\z\q\3\1\1\t\q\z\d\a\l\r\j\x\m\2\l\s\y\y\j\3\q\g\8\5\n\x\c\x\8\e\i\9\w\h\r\5\v\e\x\l\3\y\e\8\n\1\j\j\3\6\v\x\9\n\b\s\9\3\3\9\m\7\d\5\4\7\c\b\9\a\v\d\a\c\f\7\0\n\j\c\q\p\t\x\4\f\7\x\v\c\p\9\n\9\e\i\q\u\o\z\o\g\f\a\r\z\b\m\v\1\4\7\j\e\z\o\t\j\q\d\g\2\q\6\0\s\n\0\u\4\r\k\m\l\n\3\9\y\6\d\w\1\z\v\u\e\c\5\y\b\m\d\w\k\d\6\8\w\t\x\o\9\d\p\n\f\i\5\9\h\m\g\0\6
\x\1\0\g\f\v\o\y\a\p\1\d\0\b\1\t\z\v\k\1\6\e\e\p\o\r\0\j\1\l\9\s\t\5\i\8\a\c\m\z\n\0\a\r\f\p\3\9\0\p\b\7\3\k\p\k\9\u\d\j\p\4\k\f\x\t\a\2\w\q\5\r\e\d\9\8\y\y\a\k\v\p\k\i\d\b\f\q\d\2\6\0\y\e\k\s\d\a\0\9\3\d\m\j\l\q\2\k\h\5\i\c\z\j\y\u\z\j\i\7\9\w\j\b\g\t\3\b\t\f\7\y\7\0\k\i\i\4\s\8\m\0\a\0\8\l\d\8\l\l\p\3\h\a\u\r\d\g\n\t\0\k\o\0\s\j\9\5\3\z\o\m\c\k\h\8\7\8\f\o\q\u\w\l\7\6\w\u\a\r\z\h\g\g\f\d\t\l\p\l\t\b\v\6\v\9\n\j\u\t\y\1\h\9\e\t\g\k\o\y\z\g\t\i\4\2\y\9\b\5\z\r\i\t\p\2\v\i\0\e\e\n\w\g\i\y\j\w\x\8\g\s\e\d\f\d\x\0\z\z\9\g\9\2\d\3\r\z\d\5\l\w\d\u\o\3\m\n\m\q\f\y\j\q\1\4\h\t\d\b\y\p\j\q\5\w\d\o\p\y\k\s\4\o\y\q\3\d\x\s\o\i\8\s\z\4\b\m\9\3\m\5\k\8\z\s\9\y\y\w\q\9\j\j\g\3\x\3\d\9\3\9\7\u\j\9\z\w\n\t\g\f\j\p\s\j\2\z\8\u\t\x\k\e\j\6\b\l\d\6\s\z\f\3\t\p\q\p\l\o\d\z\u\e\d\4\j\d\5\n\s\x\w\s\y\z\i\8\r\a\9\k\o\g\s\r\g\8\3\1\l\t\j\o\e\a\e\e\w\5\c\p\s\0\p\o\j\7\s\9\5\y\q\k\7\v\l\c\2\q\e\t\s\k\t\b\x\b\7\4\9\7\8\c\u\q\a\e\t\3\7\s\o\w\c\a\y\m\b\f\i\7\o\3\g\s\7\s\m\3\9\5\9\7\q\u\k\c\m\d\v\y\z\j\m\c\5\j\l\0\d\h\u\k\j\g\7\0\k\q\u\5\g\e\5\w\l\d\a\4\0\o\a\v\e\3\s\k\u\4\r\p\s\k\m\o\0\0\b\u\f ]] 00:18:26.090 00:18:26.090 real 0m0.902s 00:18:26.090 user 0m0.593s 00:18:26.090 sys 0m0.342s 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:18:26.090 ************************************ 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:26.090 07:15:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:26.090 [2024-11-20 07:15:50.197902] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
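The two giant blobs above are the same 4096-byte payload twice over: first as generated, then as bash xtrace renders the right-hand side of the [[ $data == ... ]] check, where every character is backslash-escaped so it matches as a literal pattern instead of a glob. Stripped of the harness plumbing, the dd_rw_offset round trip that payload verifies looks roughly like this (gen_conf as sketched earlier; sizes and offsets from the log):

    data=$(gen_bytes 4096)          # random payload; a gen_bytes stand-in is sketched further down
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)            # write at offset 1
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)  # read the same region back
    IFS= read -rn4096 data_check < dd.dump1
    [[ $data == "$data_check" ]]    # quoting the RHS gives the same literal compare without the escaping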
00:18:26.090 [2024-11-20 07:15:50.197962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ] 00:18:26.090 { 00:18:26.090 "subsystems": [ 00:18:26.090 { 00:18:26.090 "subsystem": "bdev", 00:18:26.090 "config": [ 00:18:26.090 { 00:18:26.090 "params": { 00:18:26.090 "trtype": "pcie", 00:18:26.090 "traddr": "0000:00:10.0", 00:18:26.090 "name": "Nvme0" 00:18:26.090 }, 00:18:26.090 "method": "bdev_nvme_attach_controller" 00:18:26.090 }, 00:18:26.090 { 00:18:26.090 "method": "bdev_wait_for_examine" 00:18:26.090 } 00:18:26.090 ] 00:18:26.090 } 00:18:26.090 ] 00:18:26.090 } 00:18:26.446 [2024-11-20 07:15:50.337398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.446 [2024-11-20 07:15:50.371394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.446 [2024-11-20 07:15:50.400967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.446  [2024-11-20T07:15:50.649Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:26.446 00:18:26.446 07:15:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:26.446 00:18:26.446 real 0m12.474s 00:18:26.446 user 0m8.544s 00:18:26.446 sys 0m3.976s 00:18:26.446 ************************************ 00:18:26.446 END TEST spdk_dd_basic_rw 00:18:26.446 ************************************ 00:18:26.446 07:15:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.446 07:15:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:26.724 07:15:50 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:18:26.724 07:15:50 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.724 07:15:50 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.724 07:15:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:26.724 ************************************ 00:18:26.724 START TEST spdk_dd_posix 00:18:26.724 ************************************ 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:18:26.724 * Looking for test storage... 
00:18:26.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.724 --rc genhtml_branch_coverage=1 00:18:26.724 --rc genhtml_function_coverage=1 00:18:26.724 --rc genhtml_legend=1 00:18:26.724 --rc geninfo_all_blocks=1 00:18:26.724 --rc geninfo_unexecuted_blocks=1 00:18:26.724 00:18:26.724 ' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.724 --rc genhtml_branch_coverage=1 00:18:26.724 --rc genhtml_function_coverage=1 00:18:26.724 --rc genhtml_legend=1 00:18:26.724 --rc geninfo_all_blocks=1 00:18:26.724 --rc geninfo_unexecuted_blocks=1 00:18:26.724 00:18:26.724 ' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.724 --rc genhtml_branch_coverage=1 00:18:26.724 --rc genhtml_function_coverage=1 00:18:26.724 --rc genhtml_legend=1 00:18:26.724 --rc geninfo_all_blocks=1 00:18:26.724 --rc geninfo_unexecuted_blocks=1 00:18:26.724 00:18:26.724 ' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.724 --rc genhtml_branch_coverage=1 00:18:26.724 --rc genhtml_function_coverage=1 00:18:26.724 --rc genhtml_legend=1 00:18:26.724 --rc geninfo_all_blocks=1 00:18:26.724 --rc geninfo_unexecuted_blocks=1 00:18:26.724 00:18:26.724 ' 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.724 07:15:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:18:26.725 * First test run, liburing in use 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:26.725 ************************************ 00:18:26.725 START TEST dd_flag_append 00:18:26.725 ************************************ 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=vhmssvuk91yx7baz5nun3qjbl8dxwm68 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=7uzeflpn1081e0f5qhmaxv8vow7cpndt 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s vhmssvuk91yx7baz5nun3qjbl8dxwm68 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 7uzeflpn1081e0f5qhmaxv8vow7cpndt 00:18:26.725 07:15:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:26.725 [2024-11-20 07:15:50.828490] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
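dd_flag_append, starting above, seeds each dump file with an independent 32-byte token and then checks that --oflag=append lands dump0's bytes after dump1's existing content rather than over it. Condensed, with a gen_bytes stand-in matching the lowercase-alphanumeric payloads seen throughout the trace (the real helper is in dd/common.sh):

    gen_bytes() {    # stand-in: emit N random [a-z0-9] characters
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
    }
    dump0=$(gen_bytes 32)
    dump1=$(gen_bytes 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]]    # appended, not overwritten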
00:18:26.725 [2024-11-20 07:15:50.828546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:18:26.983 [2024-11-20 07:15:50.960335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.983 [2024-11-20 07:15:51.005078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.983 [2024-11-20 07:15:51.036274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.983  [2024-11-20T07:15:51.186Z] Copying: 32/32 [B] (average 31 kBps) 00:18:26.983 00:18:26.983 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 7uzeflpn1081e0f5qhmaxv8vow7cpndtvhmssvuk91yx7baz5nun3qjbl8dxwm68 == \7\u\z\e\f\l\p\n\1\0\8\1\e\0\f\5\q\h\m\a\x\v\8\v\o\w\7\c\p\n\d\t\v\h\m\s\s\v\u\k\9\1\y\x\7\b\a\z\5\n\u\n\3\q\j\b\l\8\d\x\w\m\6\8 ]] 00:18:26.983 00:18:26.983 real 0m0.365s 00:18:26.983 user 0m0.186s 00:18:26.983 sys 0m0.146s 00:18:26.983 ************************************ 00:18:26.983 END TEST dd_flag_append 00:18:26.984 ************************************ 00:18:26.984 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.984 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:27.242 ************************************ 00:18:27.242 START TEST dd_flag_directory 00:18:27.242 ************************************ 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:27.242 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:27.242 [2024-11-20 07:15:51.225511] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:27.242 [2024-11-20 07:15:51.225566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ] 00:18:27.242 [2024-11-20 07:15:51.365238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.242 [2024-11-20 07:15:51.400146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.242 [2024-11-20 07:15:51.429645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.500 [2024-11-20 07:15:51.452673] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:27.500 [2024-11-20 07:15:51.452707] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:27.500 [2024-11-20 07:15:51.452717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:27.500 [2024-11-20 07:15:51.507740] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.500 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.501 07:15:51 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:27.501 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:27.501 [2024-11-20 07:15:51.587358] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:27.501 [2024-11-20 07:15:51.587426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ] 00:18:27.759 [2024-11-20 07:15:51.725592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.759 [2024-11-20 07:15:51.760853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.759 [2024-11-20 07:15:51.790393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.759 [2024-11-20 07:15:51.812213] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:27.759 [2024-11-20 07:15:51.812280] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:27.759 [2024-11-20 07:15:51.812291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:27.759 [2024-11-20 07:15:51.867906] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.759 00:18:27.759 real 0m0.719s 00:18:27.759 user 0m0.346s 00:18:27.759 sys 0m0.166s 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:18:27.759 ************************************ 00:18:27.759 END TEST dd_flag_directory 00:18:27.759 ************************************ 00:18:27.759 07:15:51 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:27.759 ************************************ 00:18:27.759 START TEST dd_flag_nofollow 00:18:27.759 ************************************ 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:27.759 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:28.017 07:15:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:28.017 [2024-11-20 07:15:51.995097] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
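Both the directory test above and the nofollow test beginning here drive spdk_dd through the NOT wrapper: the copy is expected to fail, and the trace shows the wrapper folding the raw exit status down before asserting failure (es=236 observed, then es>128 so es becomes 108, then a case statement collapses it to 1). A hedged reconstruction of that logic; the exact case arms are not visible in the trace:

    NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$((es - 128))   # observed: 236 -> 108 (strip the signal bias)
      case "$es" in 108) es=1 ;; esac      # hypothetical arm; the trace only shows the result es=1
      (( !es == 0 ))                       # true exactly when the wrapped command failed
    }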
00:18:28.017 [2024-11-20 07:15:51.995155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:18:28.017 [2024-11-20 07:15:52.137538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.017 [2024-11-20 07:15:52.172678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.017 [2024-11-20 07:15:52.202535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.276 [2024-11-20 07:15:52.224906] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:28.276 [2024-11-20 07:15:52.224941] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:28.276 [2024-11-20 07:15:52.224951] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:28.276 [2024-11-20 07:15:52.280622] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:28.276 07:15:52 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:28.276 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:28.276 [2024-11-20 07:15:52.365908] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:28.276 [2024-11-20 07:15:52.365967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59369 ] 00:18:28.535 [2024-11-20 07:15:52.505177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.535 [2024-11-20 07:15:52.540001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.535 [2024-11-20 07:15:52.569667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.535 [2024-11-20 07:15:52.592092] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:28.535 [2024-11-20 07:15:52.592127] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:28.535 [2024-11-20 07:15:52.592138] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:28.535 [2024-11-20 07:15:52.647763] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:18:28.535 07:15:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:28.793 [2024-11-20 07:15:52.738206] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
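The symlink mechanics being exercised: --iflag=nofollow maps to O_NOFOLLOW, so opening dd.dump0.link fails with ELOOP, whose strerror is exactly the 'Too many levels of symbolic links' message in the errors above; the run that has just started drops the flag, so the link is followed and 512 bytes copy through it. Condensed:

    ln -fs dd.dump0 dd.dump0.link
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1  # O_NOFOLLOW on a symlink -> ELOOP, failure expected
    spdk_dd --if=dd.dump0.link --of=dd.dump1                       # no flag: link followed, 512 B copied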
00:18:28.793 [2024-11-20 07:15:52.738280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:18:28.793 [2024-11-20 07:15:52.874982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.793 [2024-11-20 07:15:52.910590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.793 [2024-11-20 07:15:52.940588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.793  [2024-11-20T07:15:53.254Z] Copying: 512/512 [B] (average 500 kBps) 00:18:29.051 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ bqdd4es3vmx8pwhda87sgpmfltvdvpx7luzfgtngpwjwfch0uy6g0otzd1d8rnzoek6qubiatdal7scbx7mhmu3vy6qgihgux4cmp59bj2zrhtteb7bjnwwh8mugiihosuuyo2tky4ctvivtxzyu8p4jvh8vf56m1uew9r2tvavbls4yzgmto6ybwinn8vup0da7ozdtxn1x3fqj9loxdm5tnyr7lt6a3djqioljosxonwviok29ogqug57axgn69tw0y71j64mi6fqe61dhyt6rk5iff6qqtuhgqa5z42kiihnlwitq3bwpmxzo9k431p6bcpe5jtyh9dlzwl03dbn8hgy0q5t63ftr0m9rxm50l671xl1lzdk8hgji1lwntf6e26p28qz9yeglso3kso32pty0vbowidmqz61luinfkhe2cnetj7bnkpniiv2s7f1aynpjfic055rpveekih1vft56m0la6g0jgs2qp7l95op7hfm26w5hr0nycqxw == \b\q\d\d\4\e\s\3\v\m\x\8\p\w\h\d\a\8\7\s\g\p\m\f\l\t\v\d\v\p\x\7\l\u\z\f\g\t\n\g\p\w\j\w\f\c\h\0\u\y\6\g\0\o\t\z\d\1\d\8\r\n\z\o\e\k\6\q\u\b\i\a\t\d\a\l\7\s\c\b\x\7\m\h\m\u\3\v\y\6\q\g\i\h\g\u\x\4\c\m\p\5\9\b\j\2\z\r\h\t\t\e\b\7\b\j\n\w\w\h\8\m\u\g\i\i\h\o\s\u\u\y\o\2\t\k\y\4\c\t\v\i\v\t\x\z\y\u\8\p\4\j\v\h\8\v\f\5\6\m\1\u\e\w\9\r\2\t\v\a\v\b\l\s\4\y\z\g\m\t\o\6\y\b\w\i\n\n\8\v\u\p\0\d\a\7\o\z\d\t\x\n\1\x\3\f\q\j\9\l\o\x\d\m\5\t\n\y\r\7\l\t\6\a\3\d\j\q\i\o\l\j\o\s\x\o\n\w\v\i\o\k\2\9\o\g\q\u\g\5\7\a\x\g\n\6\9\t\w\0\y\7\1\j\6\4\m\i\6\f\q\e\6\1\d\h\y\t\6\r\k\5\i\f\f\6\q\q\t\u\h\g\q\a\5\z\4\2\k\i\i\h\n\l\w\i\t\q\3\b\w\p\m\x\z\o\9\k\4\3\1\p\6\b\c\p\e\5\j\t\y\h\9\d\l\z\w\l\0\3\d\b\n\8\h\g\y\0\q\5\t\6\3\f\t\r\0\m\9\r\x\m\5\0\l\6\7\1\x\l\1\l\z\d\k\8\h\g\j\i\1\l\w\n\t\f\6\e\2\6\p\2\8\q\z\9\y\e\g\l\s\o\3\k\s\o\3\2\p\t\y\0\v\b\o\w\i\d\m\q\z\6\1\l\u\i\n\f\k\h\e\2\c\n\e\t\j\7\b\n\k\p\n\i\i\v\2\s\7\f\1\a\y\n\p\j\f\i\c\0\5\5\r\p\v\e\e\k\i\h\1\v\f\t\5\6\m\0\l\a\6\g\0\j\g\s\2\q\p\7\l\9\5\o\p\7\h\f\m\2\6\w\5\h\r\0\n\y\c\q\x\w ]] 00:18:29.051 00:18:29.051 real 0m1.112s 00:18:29.051 user 0m0.552s 00:18:29.051 sys 0m0.315s 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:18:29.051 ************************************ 00:18:29.051 END TEST dd_flag_nofollow 00:18:29.051 ************************************ 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:29.051 ************************************ 00:18:29.051 START TEST dd_flag_noatime 00:18:29.051 ************************************ 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732086952 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:29.051 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732086953 00:18:29.052 07:15:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:18:29.985 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:29.985 [2024-11-20 07:15:54.157665] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:29.985 [2024-11-20 07:15:54.157724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:18:30.242 [2024-11-20 07:15:54.296989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.242 [2024-11-20 07:15:54.330909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.242 [2024-11-20 07:15:54.360254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.242  [2024-11-20T07:15:54.703Z] Copying: 512/512 [B] (average 500 kBps) 00:18:30.500 00:18:30.500 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:30.500 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732086952 )) 00:18:30.500 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:30.500 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732086953 )) 00:18:30.500 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:30.500 [2024-11-20 07:15:54.523030] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
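dd_flag_noatime records both files' access times with stat --printf=%X (1732086952 and 1732086953 above), sleeps past the one-second resolution, and then checks that a read with --iflag=noatime leaves dump0's atime in place while the plain read that follows advances it. In outline, with the epoch literals taken from this run (whether a plain read bumps atime can also depend on mount options such as relatime):

    atime_if=$(stat --printf=%X dd.dump0)           # 1732086952 here
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))  # O_NOATIME read: atime unchanged
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))   # plain read: atime moved forward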
00:18:30.500 [2024-11-20 07:15:54.523089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:18:30.500 [2024-11-20 07:15:54.663550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.500 [2024-11-20 07:15:54.698646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.758 [2024-11-20 07:15:54.727961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.758  [2024-11-20T07:15:54.961Z] Copying: 512/512 [B] (average 500 kBps) 00:18:30.758 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732086954 )) 00:18:30.758 00:18:30.758 real 0m1.746s 00:18:30.758 user 0m0.368s 00:18:30.758 sys 0m0.300s 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:18:30.758 ************************************ 00:18:30.758 END TEST dd_flag_noatime 00:18:30.758 ************************************ 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:30.758 ************************************ 00:18:30.758 START TEST dd_flags_misc 00:18:30.758 ************************************ 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:30.758 07:15:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:30.758 [2024-11-20 07:15:54.932549] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
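dd_flags_misc, which has just begun, crosses every read flag with every write flag; the arrays in the trace define the matrix, so the runs that follow are direct/direct, direct/nonblock, direct/sync, direct/dsync, and then the same four with nonblock on the read side. The driving loop, reconstructed from those arrays (payload plumbing simplified):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > dd.dump0    # fresh 512-byte payload per read flag, as in the trace
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      done
    done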
00:18:30.758 [2024-11-20 07:15:54.932623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:18:31.015 [2024-11-20 07:15:55.076019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.015 [2024-11-20 07:15:55.121715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.015 [2024-11-20 07:15:55.154525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.015  [2024-11-20T07:15:55.477Z] Copying: 512/512 [B] (average 500 kBps) 00:18:31.274 00:18:31.274 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qc5u4v1fodrg00y8n1r7iiguod4n9mkxzzbosnqs58nr21djifpyqx4lg3ugu7b8gku0vkigyo4kzg8tbjyiepsbwda43f7jsrld2muvvveayrukw98fxgpz0i90i3t4qfcsy7oyuf5jafoqiu7w2obqlx0orf8zce1nh766ln79gc7j8slb4bitmn6y25fhzyu918o5abxma6fod1530utano92ffh09k41qgpo9cbptiihqcdx0krgdsq2333i94uwrjet0a4psq97kf7t7rho98c5ecfjub3jln5cxcn85dkpssmiwhrbawpu6riejvap2xlynt01vkdxrzf8nktv5d61vl66lychwj0nvo512w69vicd7o0zzh3vydg7bk0intbkygmbcf2xdesjsz4r71ggscdkr9qf9rjxasufym72x53r15zo6pg0ax6t69au1p0nctrytb18myc2u8vshffi3q3od5dotyj0rxo52lkgbf7vtq1dhf36ypab == \q\c\5\u\4\v\1\f\o\d\r\g\0\0\y\8\n\1\r\7\i\i\g\u\o\d\4\n\9\m\k\x\z\z\b\o\s\n\q\s\5\8\n\r\2\1\d\j\i\f\p\y\q\x\4\l\g\3\u\g\u\7\b\8\g\k\u\0\v\k\i\g\y\o\4\k\z\g\8\t\b\j\y\i\e\p\s\b\w\d\a\4\3\f\7\j\s\r\l\d\2\m\u\v\v\v\e\a\y\r\u\k\w\9\8\f\x\g\p\z\0\i\9\0\i\3\t\4\q\f\c\s\y\7\o\y\u\f\5\j\a\f\o\q\i\u\7\w\2\o\b\q\l\x\0\o\r\f\8\z\c\e\1\n\h\7\6\6\l\n\7\9\g\c\7\j\8\s\l\b\4\b\i\t\m\n\6\y\2\5\f\h\z\y\u\9\1\8\o\5\a\b\x\m\a\6\f\o\d\1\5\3\0\u\t\a\n\o\9\2\f\f\h\0\9\k\4\1\q\g\p\o\9\c\b\p\t\i\i\h\q\c\d\x\0\k\r\g\d\s\q\2\3\3\3\i\9\4\u\w\r\j\e\t\0\a\4\p\s\q\9\7\k\f\7\t\7\r\h\o\9\8\c\5\e\c\f\j\u\b\3\j\l\n\5\c\x\c\n\8\5\d\k\p\s\s\m\i\w\h\r\b\a\w\p\u\6\r\i\e\j\v\a\p\2\x\l\y\n\t\0\1\v\k\d\x\r\z\f\8\n\k\t\v\5\d\6\1\v\l\6\6\l\y\c\h\w\j\0\n\v\o\5\1\2\w\6\9\v\i\c\d\7\o\0\z\z\h\3\v\y\d\g\7\b\k\0\i\n\t\b\k\y\g\m\b\c\f\2\x\d\e\s\j\s\z\4\r\7\1\g\g\s\c\d\k\r\9\q\f\9\r\j\x\a\s\u\f\y\m\7\2\x\5\3\r\1\5\z\o\6\p\g\0\a\x\6\t\6\9\a\u\1\p\0\n\c\t\r\y\t\b\1\8\m\y\c\2\u\8\v\s\h\f\f\i\3\q\3\o\d\5\d\o\t\y\j\0\r\x\o\5\2\l\k\g\b\f\7\v\t\q\1\d\h\f\3\6\y\p\a\b ]] 00:18:31.274 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.274 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:31.274 [2024-11-20 07:15:55.301643] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
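The long backslash-heavy block above is not corruption: it is bash xtrace output for a [[ lhs == rhs ]] comparison of the two dump files' contents, with the right-hand side escaped character by character so it matches literally rather than as a glob pattern. The test generates text-safe random bytes, so a whole-file string compare works; a sketch of the equivalent check, with cmp shown as the binary-safe alternative:

    # after a flagged copy, dump1 must be byte-identical to dump0
    [[ "$(< dd.dump0)" == "$(< dd.dump1)" ]] || echo 'copy mismatch' >&2
    cmp -s dd.dump0 dd.dump1                # safer for arbitrary binary data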
00:18:31.274 [2024-11-20 07:15:55.301705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:18:31.274 [2024-11-20 07:15:55.441341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.533 [2024-11-20 07:15:55.475920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.533 [2024-11-20 07:15:55.505077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.533  [2024-11-20T07:15:55.736Z] Copying: 512/512 [B] (average 500 kBps) 00:18:31.533 00:18:31.533 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qc5u4v1fodrg00y8n1r7iiguod4n9mkxzzbosnqs58nr21djifpyqx4lg3ugu7b8gku0vkigyo4kzg8tbjyiepsbwda43f7jsrld2muvvveayrukw98fxgpz0i90i3t4qfcsy7oyuf5jafoqiu7w2obqlx0orf8zce1nh766ln79gc7j8slb4bitmn6y25fhzyu918o5abxma6fod1530utano92ffh09k41qgpo9cbptiihqcdx0krgdsq2333i94uwrjet0a4psq97kf7t7rho98c5ecfjub3jln5cxcn85dkpssmiwhrbawpu6riejvap2xlynt01vkdxrzf8nktv5d61vl66lychwj0nvo512w69vicd7o0zzh3vydg7bk0intbkygmbcf2xdesjsz4r71ggscdkr9qf9rjxasufym72x53r15zo6pg0ax6t69au1p0nctrytb18myc2u8vshffi3q3od5dotyj0rxo52lkgbf7vtq1dhf36ypab == \q\c\5\u\4\v\1\f\o\d\r\g\0\0\y\8\n\1\r\7\i\i\g\u\o\d\4\n\9\m\k\x\z\z\b\o\s\n\q\s\5\8\n\r\2\1\d\j\i\f\p\y\q\x\4\l\g\3\u\g\u\7\b\8\g\k\u\0\v\k\i\g\y\o\4\k\z\g\8\t\b\j\y\i\e\p\s\b\w\d\a\4\3\f\7\j\s\r\l\d\2\m\u\v\v\v\e\a\y\r\u\k\w\9\8\f\x\g\p\z\0\i\9\0\i\3\t\4\q\f\c\s\y\7\o\y\u\f\5\j\a\f\o\q\i\u\7\w\2\o\b\q\l\x\0\o\r\f\8\z\c\e\1\n\h\7\6\6\l\n\7\9\g\c\7\j\8\s\l\b\4\b\i\t\m\n\6\y\2\5\f\h\z\y\u\9\1\8\o\5\a\b\x\m\a\6\f\o\d\1\5\3\0\u\t\a\n\o\9\2\f\f\h\0\9\k\4\1\q\g\p\o\9\c\b\p\t\i\i\h\q\c\d\x\0\k\r\g\d\s\q\2\3\3\3\i\9\4\u\w\r\j\e\t\0\a\4\p\s\q\9\7\k\f\7\t\7\r\h\o\9\8\c\5\e\c\f\j\u\b\3\j\l\n\5\c\x\c\n\8\5\d\k\p\s\s\m\i\w\h\r\b\a\w\p\u\6\r\i\e\j\v\a\p\2\x\l\y\n\t\0\1\v\k\d\x\r\z\f\8\n\k\t\v\5\d\6\1\v\l\6\6\l\y\c\h\w\j\0\n\v\o\5\1\2\w\6\9\v\i\c\d\7\o\0\z\z\h\3\v\y\d\g\7\b\k\0\i\n\t\b\k\y\g\m\b\c\f\2\x\d\e\s\j\s\z\4\r\7\1\g\g\s\c\d\k\r\9\q\f\9\r\j\x\a\s\u\f\y\m\7\2\x\5\3\r\1\5\z\o\6\p\g\0\a\x\6\t\6\9\a\u\1\p\0\n\c\t\r\y\t\b\1\8\m\y\c\2\u\8\v\s\h\f\f\i\3\q\3\o\d\5\d\o\t\y\j\0\r\x\o\5\2\l\k\g\b\f\7\v\t\q\1\d\h\f\3\6\y\p\a\b ]] 00:18:31.533 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.533 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:31.533 [2024-11-20 07:15:55.663662] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:31.533 [2024-11-20 07:15:55.663742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59469 ] 00:18:31.854 [2024-11-20 07:15:55.806337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.854 [2024-11-20 07:15:55.841093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.854 [2024-11-20 07:15:55.870952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.854  [2024-11-20T07:15:56.057Z] Copying: 512/512 [B] (average 125 kBps) 00:18:31.854 00:18:31.854 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qc5u4v1fodrg00y8n1r7iiguod4n9mkxzzbosnqs58nr21djifpyqx4lg3ugu7b8gku0vkigyo4kzg8tbjyiepsbwda43f7jsrld2muvvveayrukw98fxgpz0i90i3t4qfcsy7oyuf5jafoqiu7w2obqlx0orf8zce1nh766ln79gc7j8slb4bitmn6y25fhzyu918o5abxma6fod1530utano92ffh09k41qgpo9cbptiihqcdx0krgdsq2333i94uwrjet0a4psq97kf7t7rho98c5ecfjub3jln5cxcn85dkpssmiwhrbawpu6riejvap2xlynt01vkdxrzf8nktv5d61vl66lychwj0nvo512w69vicd7o0zzh3vydg7bk0intbkygmbcf2xdesjsz4r71ggscdkr9qf9rjxasufym72x53r15zo6pg0ax6t69au1p0nctrytb18myc2u8vshffi3q3od5dotyj0rxo52lkgbf7vtq1dhf36ypab == \q\c\5\u\4\v\1\f\o\d\r\g\0\0\y\8\n\1\r\7\i\i\g\u\o\d\4\n\9\m\k\x\z\z\b\o\s\n\q\s\5\8\n\r\2\1\d\j\i\f\p\y\q\x\4\l\g\3\u\g\u\7\b\8\g\k\u\0\v\k\i\g\y\o\4\k\z\g\8\t\b\j\y\i\e\p\s\b\w\d\a\4\3\f\7\j\s\r\l\d\2\m\u\v\v\v\e\a\y\r\u\k\w\9\8\f\x\g\p\z\0\i\9\0\i\3\t\4\q\f\c\s\y\7\o\y\u\f\5\j\a\f\o\q\i\u\7\w\2\o\b\q\l\x\0\o\r\f\8\z\c\e\1\n\h\7\6\6\l\n\7\9\g\c\7\j\8\s\l\b\4\b\i\t\m\n\6\y\2\5\f\h\z\y\u\9\1\8\o\5\a\b\x\m\a\6\f\o\d\1\5\3\0\u\t\a\n\o\9\2\f\f\h\0\9\k\4\1\q\g\p\o\9\c\b\p\t\i\i\h\q\c\d\x\0\k\r\g\d\s\q\2\3\3\3\i\9\4\u\w\r\j\e\t\0\a\4\p\s\q\9\7\k\f\7\t\7\r\h\o\9\8\c\5\e\c\f\j\u\b\3\j\l\n\5\c\x\c\n\8\5\d\k\p\s\s\m\i\w\h\r\b\a\w\p\u\6\r\i\e\j\v\a\p\2\x\l\y\n\t\0\1\v\k\d\x\r\z\f\8\n\k\t\v\5\d\6\1\v\l\6\6\l\y\c\h\w\j\0\n\v\o\5\1\2\w\6\9\v\i\c\d\7\o\0\z\z\h\3\v\y\d\g\7\b\k\0\i\n\t\b\k\y\g\m\b\c\f\2\x\d\e\s\j\s\z\4\r\7\1\g\g\s\c\d\k\r\9\q\f\9\r\j\x\a\s\u\f\y\m\7\2\x\5\3\r\1\5\z\o\6\p\g\0\a\x\6\t\6\9\a\u\1\p\0\n\c\t\r\y\t\b\1\8\m\y\c\2\u\8\v\s\h\f\f\i\3\q\3\o\d\5\d\o\t\y\j\0\r\x\o\5\2\l\k\g\b\f\7\v\t\q\1\d\h\f\3\6\y\p\a\b ]] 00:18:31.854 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.854 07:15:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:31.854 [2024-11-20 07:15:56.027796] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:31.854 [2024-11-20 07:15:56.027856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ] 00:18:32.127 [2024-11-20 07:15:56.166309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.127 [2024-11-20 07:15:56.202071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.127 [2024-11-20 07:15:56.232009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.127  [2024-11-20T07:15:56.591Z] Copying: 512/512 [B] (average 250 kBps) 00:18:32.388 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qc5u4v1fodrg00y8n1r7iiguod4n9mkxzzbosnqs58nr21djifpyqx4lg3ugu7b8gku0vkigyo4kzg8tbjyiepsbwda43f7jsrld2muvvveayrukw98fxgpz0i90i3t4qfcsy7oyuf5jafoqiu7w2obqlx0orf8zce1nh766ln79gc7j8slb4bitmn6y25fhzyu918o5abxma6fod1530utano92ffh09k41qgpo9cbptiihqcdx0krgdsq2333i94uwrjet0a4psq97kf7t7rho98c5ecfjub3jln5cxcn85dkpssmiwhrbawpu6riejvap2xlynt01vkdxrzf8nktv5d61vl66lychwj0nvo512w69vicd7o0zzh3vydg7bk0intbkygmbcf2xdesjsz4r71ggscdkr9qf9rjxasufym72x53r15zo6pg0ax6t69au1p0nctrytb18myc2u8vshffi3q3od5dotyj0rxo52lkgbf7vtq1dhf36ypab == \q\c\5\u\4\v\1\f\o\d\r\g\0\0\y\8\n\1\r\7\i\i\g\u\o\d\4\n\9\m\k\x\z\z\b\o\s\n\q\s\5\8\n\r\2\1\d\j\i\f\p\y\q\x\4\l\g\3\u\g\u\7\b\8\g\k\u\0\v\k\i\g\y\o\4\k\z\g\8\t\b\j\y\i\e\p\s\b\w\d\a\4\3\f\7\j\s\r\l\d\2\m\u\v\v\v\e\a\y\r\u\k\w\9\8\f\x\g\p\z\0\i\9\0\i\3\t\4\q\f\c\s\y\7\o\y\u\f\5\j\a\f\o\q\i\u\7\w\2\o\b\q\l\x\0\o\r\f\8\z\c\e\1\n\h\7\6\6\l\n\7\9\g\c\7\j\8\s\l\b\4\b\i\t\m\n\6\y\2\5\f\h\z\y\u\9\1\8\o\5\a\b\x\m\a\6\f\o\d\1\5\3\0\u\t\a\n\o\9\2\f\f\h\0\9\k\4\1\q\g\p\o\9\c\b\p\t\i\i\h\q\c\d\x\0\k\r\g\d\s\q\2\3\3\3\i\9\4\u\w\r\j\e\t\0\a\4\p\s\q\9\7\k\f\7\t\7\r\h\o\9\8\c\5\e\c\f\j\u\b\3\j\l\n\5\c\x\c\n\8\5\d\k\p\s\s\m\i\w\h\r\b\a\w\p\u\6\r\i\e\j\v\a\p\2\x\l\y\n\t\0\1\v\k\d\x\r\z\f\8\n\k\t\v\5\d\6\1\v\l\6\6\l\y\c\h\w\j\0\n\v\o\5\1\2\w\6\9\v\i\c\d\7\o\0\z\z\h\3\v\y\d\g\7\b\k\0\i\n\t\b\k\y\g\m\b\c\f\2\x\d\e\s\j\s\z\4\r\7\1\g\g\s\c\d\k\r\9\q\f\9\r\j\x\a\s\u\f\y\m\7\2\x\5\3\r\1\5\z\o\6\p\g\0\a\x\6\t\6\9\a\u\1\p\0\n\c\t\r\y\t\b\1\8\m\y\c\2\u\8\v\s\h\f\f\i\3\q\3\o\d\5\d\o\t\y\j\0\r\x\o\5\2\l\k\g\b\f\7\v\t\q\1\d\h\f\3\6\y\p\a\b ]] 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:32.388 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:32.388 [2024-11-20 07:15:56.393941] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
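The sync and dsync passes above differ only in the output open flags: --oflag=sync maps to O_SYNC (each write flushes data and metadata) and --oflag=dsync to O_DSYNC (data only), which is consistent with the lower copy averages reported for those runs; at 512 bytes the figures mostly reflect per-write flush latency rather than bandwidth. The distinction can be reproduced with coreutils dd, which accepts the same flag names:

    dd if=dd.dump0 of=dd.dump1 oflag=sync  conv=notrunc   # O_SYNC: data + metadata
    dd if=dd.dump0 of=dd.dump1 oflag=dsync conv=notrunc   # O_DSYNC: data only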
00:18:32.388 [2024-11-20 07:15:56.394002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:18:32.388 [2024-11-20 07:15:56.533840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.388 [2024-11-20 07:15:56.567565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.646 [2024-11-20 07:15:56.597065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.646  [2024-11-20T07:15:56.849Z] Copying: 512/512 [B] (average 500 kBps) 00:18:32.646 00:18:32.646 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dsh2wskm06hcz6oqj6rbtwipjncup8j2h7pkge216b0fl36i25su08io99iw1mcg8magy7drjnoaaj5d5iouhfvq4f5rwnmqu3lehoprvrf49l6sg17e1fgx6vhvc6c3bhv5yr9vhkfn9xlvsht677khamgo8y8ii023f84vxsa4x0xj2frvo96jmh7f7wmxh75x7tgpbdjjqk6amfajj46nxj7kmfjt6esx2cn8z2y9fu0t18ztfaiqnmmzho5n36beji0coce66up0cf34e9oiwuzlkfn274ys551xe5529u42gc1vzogaofljr39qm48rqb4awybf0nee48yq2vt32yboj0ywvlrel56cowl0pprz58kz3l556eb6w4dd4lz1yiixrmdsxlqf0blpepdnyvnotf7qa21el4yh5b34bn1fsp7gezivybjk85pdzdyeiiwxwqjnl0d1knxjtg0xpy1g5j4jbrv1tuucvp1rz8w9m6j7fc4du7yykdoi == \d\s\h\2\w\s\k\m\0\6\h\c\z\6\o\q\j\6\r\b\t\w\i\p\j\n\c\u\p\8\j\2\h\7\p\k\g\e\2\1\6\b\0\f\l\3\6\i\2\5\s\u\0\8\i\o\9\9\i\w\1\m\c\g\8\m\a\g\y\7\d\r\j\n\o\a\a\j\5\d\5\i\o\u\h\f\v\q\4\f\5\r\w\n\m\q\u\3\l\e\h\o\p\r\v\r\f\4\9\l\6\s\g\1\7\e\1\f\g\x\6\v\h\v\c\6\c\3\b\h\v\5\y\r\9\v\h\k\f\n\9\x\l\v\s\h\t\6\7\7\k\h\a\m\g\o\8\y\8\i\i\0\2\3\f\8\4\v\x\s\a\4\x\0\x\j\2\f\r\v\o\9\6\j\m\h\7\f\7\w\m\x\h\7\5\x\7\t\g\p\b\d\j\j\q\k\6\a\m\f\a\j\j\4\6\n\x\j\7\k\m\f\j\t\6\e\s\x\2\c\n\8\z\2\y\9\f\u\0\t\1\8\z\t\f\a\i\q\n\m\m\z\h\o\5\n\3\6\b\e\j\i\0\c\o\c\e\6\6\u\p\0\c\f\3\4\e\9\o\i\w\u\z\l\k\f\n\2\7\4\y\s\5\5\1\x\e\5\5\2\9\u\4\2\g\c\1\v\z\o\g\a\o\f\l\j\r\3\9\q\m\4\8\r\q\b\4\a\w\y\b\f\0\n\e\e\4\8\y\q\2\v\t\3\2\y\b\o\j\0\y\w\v\l\r\e\l\5\6\c\o\w\l\0\p\p\r\z\5\8\k\z\3\l\5\5\6\e\b\6\w\4\d\d\4\l\z\1\y\i\i\x\r\m\d\s\x\l\q\f\0\b\l\p\e\p\d\n\y\v\n\o\t\f\7\q\a\2\1\e\l\4\y\h\5\b\3\4\b\n\1\f\s\p\7\g\e\z\i\v\y\b\j\k\8\5\p\d\z\d\y\e\i\i\w\x\w\q\j\n\l\0\d\1\k\n\x\j\t\g\0\x\p\y\1\g\5\j\4\j\b\r\v\1\t\u\u\c\v\p\1\r\z\8\w\9\m\6\j\7\f\c\4\d\u\7\y\y\k\d\o\i ]] 00:18:32.646 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:32.646 07:15:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:32.646 [2024-11-20 07:15:56.749900] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
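The nonblock input passes around this point are expected to behave exactly like plain copies: O_NONBLOCK has no practical effect on regular files, which are always considered ready for I/O; the flag matters for FIFOs, sockets, and some device nodes. A one-line reproduction, again via coreutils dd:

    dd if=dd.dump0 of=dd.dump1 iflag=nonblock oflag=nonblock conv=notrunc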
00:18:32.646 [2024-11-20 07:15:56.749964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:18:32.904 [2024-11-20 07:15:56.889316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.904 [2024-11-20 07:15:56.924425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.904 [2024-11-20 07:15:56.954436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.904  [2024-11-20T07:15:57.107Z] Copying: 512/512 [B] (average 500 kBps) 00:18:32.904 00:18:32.904 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dsh2wskm06hcz6oqj6rbtwipjncup8j2h7pkge216b0fl36i25su08io99iw1mcg8magy7drjnoaaj5d5iouhfvq4f5rwnmqu3lehoprvrf49l6sg17e1fgx6vhvc6c3bhv5yr9vhkfn9xlvsht677khamgo8y8ii023f84vxsa4x0xj2frvo96jmh7f7wmxh75x7tgpbdjjqk6amfajj46nxj7kmfjt6esx2cn8z2y9fu0t18ztfaiqnmmzho5n36beji0coce66up0cf34e9oiwuzlkfn274ys551xe5529u42gc1vzogaofljr39qm48rqb4awybf0nee48yq2vt32yboj0ywvlrel56cowl0pprz58kz3l556eb6w4dd4lz1yiixrmdsxlqf0blpepdnyvnotf7qa21el4yh5b34bn1fsp7gezivybjk85pdzdyeiiwxwqjnl0d1knxjtg0xpy1g5j4jbrv1tuucvp1rz8w9m6j7fc4du7yykdoi == \d\s\h\2\w\s\k\m\0\6\h\c\z\6\o\q\j\6\r\b\t\w\i\p\j\n\c\u\p\8\j\2\h\7\p\k\g\e\2\1\6\b\0\f\l\3\6\i\2\5\s\u\0\8\i\o\9\9\i\w\1\m\c\g\8\m\a\g\y\7\d\r\j\n\o\a\a\j\5\d\5\i\o\u\h\f\v\q\4\f\5\r\w\n\m\q\u\3\l\e\h\o\p\r\v\r\f\4\9\l\6\s\g\1\7\e\1\f\g\x\6\v\h\v\c\6\c\3\b\h\v\5\y\r\9\v\h\k\f\n\9\x\l\v\s\h\t\6\7\7\k\h\a\m\g\o\8\y\8\i\i\0\2\3\f\8\4\v\x\s\a\4\x\0\x\j\2\f\r\v\o\9\6\j\m\h\7\f\7\w\m\x\h\7\5\x\7\t\g\p\b\d\j\j\q\k\6\a\m\f\a\j\j\4\6\n\x\j\7\k\m\f\j\t\6\e\s\x\2\c\n\8\z\2\y\9\f\u\0\t\1\8\z\t\f\a\i\q\n\m\m\z\h\o\5\n\3\6\b\e\j\i\0\c\o\c\e\6\6\u\p\0\c\f\3\4\e\9\o\i\w\u\z\l\k\f\n\2\7\4\y\s\5\5\1\x\e\5\5\2\9\u\4\2\g\c\1\v\z\o\g\a\o\f\l\j\r\3\9\q\m\4\8\r\q\b\4\a\w\y\b\f\0\n\e\e\4\8\y\q\2\v\t\3\2\y\b\o\j\0\y\w\v\l\r\e\l\5\6\c\o\w\l\0\p\p\r\z\5\8\k\z\3\l\5\5\6\e\b\6\w\4\d\d\4\l\z\1\y\i\i\x\r\m\d\s\x\l\q\f\0\b\l\p\e\p\d\n\y\v\n\o\t\f\7\q\a\2\1\e\l\4\y\h\5\b\3\4\b\n\1\f\s\p\7\g\e\z\i\v\y\b\j\k\8\5\p\d\z\d\y\e\i\i\w\x\w\q\j\n\l\0\d\1\k\n\x\j\t\g\0\x\p\y\1\g\5\j\4\j\b\r\v\1\t\u\u\c\v\p\1\r\z\8\w\9\m\6\j\7\f\c\4\d\u\7\y\y\k\d\o\i ]] 00:18:32.904 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:32.904 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:33.162 [2024-11-20 07:15:57.109013] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:33.162 [2024-11-20 07:15:57.109080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59497 ] 00:18:33.162 [2024-11-20 07:15:57.241174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.162 [2024-11-20 07:15:57.275841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.162 [2024-11-20 07:15:57.305893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.162  [2024-11-20T07:15:57.635Z] Copying: 512/512 [B] (average 62 kBps) 00:18:33.432 00:18:33.432 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dsh2wskm06hcz6oqj6rbtwipjncup8j2h7pkge216b0fl36i25su08io99iw1mcg8magy7drjnoaaj5d5iouhfvq4f5rwnmqu3lehoprvrf49l6sg17e1fgx6vhvc6c3bhv5yr9vhkfn9xlvsht677khamgo8y8ii023f84vxsa4x0xj2frvo96jmh7f7wmxh75x7tgpbdjjqk6amfajj46nxj7kmfjt6esx2cn8z2y9fu0t18ztfaiqnmmzho5n36beji0coce66up0cf34e9oiwuzlkfn274ys551xe5529u42gc1vzogaofljr39qm48rqb4awybf0nee48yq2vt32yboj0ywvlrel56cowl0pprz58kz3l556eb6w4dd4lz1yiixrmdsxlqf0blpepdnyvnotf7qa21el4yh5b34bn1fsp7gezivybjk85pdzdyeiiwxwqjnl0d1knxjtg0xpy1g5j4jbrv1tuucvp1rz8w9m6j7fc4du7yykdoi == \d\s\h\2\w\s\k\m\0\6\h\c\z\6\o\q\j\6\r\b\t\w\i\p\j\n\c\u\p\8\j\2\h\7\p\k\g\e\2\1\6\b\0\f\l\3\6\i\2\5\s\u\0\8\i\o\9\9\i\w\1\m\c\g\8\m\a\g\y\7\d\r\j\n\o\a\a\j\5\d\5\i\o\u\h\f\v\q\4\f\5\r\w\n\m\q\u\3\l\e\h\o\p\r\v\r\f\4\9\l\6\s\g\1\7\e\1\f\g\x\6\v\h\v\c\6\c\3\b\h\v\5\y\r\9\v\h\k\f\n\9\x\l\v\s\h\t\6\7\7\k\h\a\m\g\o\8\y\8\i\i\0\2\3\f\8\4\v\x\s\a\4\x\0\x\j\2\f\r\v\o\9\6\j\m\h\7\f\7\w\m\x\h\7\5\x\7\t\g\p\b\d\j\j\q\k\6\a\m\f\a\j\j\4\6\n\x\j\7\k\m\f\j\t\6\e\s\x\2\c\n\8\z\2\y\9\f\u\0\t\1\8\z\t\f\a\i\q\n\m\m\z\h\o\5\n\3\6\b\e\j\i\0\c\o\c\e\6\6\u\p\0\c\f\3\4\e\9\o\i\w\u\z\l\k\f\n\2\7\4\y\s\5\5\1\x\e\5\5\2\9\u\4\2\g\c\1\v\z\o\g\a\o\f\l\j\r\3\9\q\m\4\8\r\q\b\4\a\w\y\b\f\0\n\e\e\4\8\y\q\2\v\t\3\2\y\b\o\j\0\y\w\v\l\r\e\l\5\6\c\o\w\l\0\p\p\r\z\5\8\k\z\3\l\5\5\6\e\b\6\w\4\d\d\4\l\z\1\y\i\i\x\r\m\d\s\x\l\q\f\0\b\l\p\e\p\d\n\y\v\n\o\t\f\7\q\a\2\1\e\l\4\y\h\5\b\3\4\b\n\1\f\s\p\7\g\e\z\i\v\y\b\j\k\8\5\p\d\z\d\y\e\i\i\w\x\w\q\j\n\l\0\d\1\k\n\x\j\t\g\0\x\p\y\1\g\5\j\4\j\b\r\v\1\t\u\u\c\v\p\1\r\z\8\w\9\m\6\j\7\f\c\4\d\u\7\y\y\k\d\o\i ]] 00:18:33.432 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:33.432 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:33.432 [2024-11-20 07:15:57.468113] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:33.432 [2024-11-20 07:15:57.468175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59507 ] 00:18:33.432 [2024-11-20 07:15:57.609924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.693 [2024-11-20 07:15:57.644574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.693 [2024-11-20 07:15:57.674324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.693  [2024-11-20T07:15:57.896Z] Copying: 512/512 [B] (average 250 kBps) 00:18:33.693 00:18:33.693 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dsh2wskm06hcz6oqj6rbtwipjncup8j2h7pkge216b0fl36i25su08io99iw1mcg8magy7drjnoaaj5d5iouhfvq4f5rwnmqu3lehoprvrf49l6sg17e1fgx6vhvc6c3bhv5yr9vhkfn9xlvsht677khamgo8y8ii023f84vxsa4x0xj2frvo96jmh7f7wmxh75x7tgpbdjjqk6amfajj46nxj7kmfjt6esx2cn8z2y9fu0t18ztfaiqnmmzho5n36beji0coce66up0cf34e9oiwuzlkfn274ys551xe5529u42gc1vzogaofljr39qm48rqb4awybf0nee48yq2vt32yboj0ywvlrel56cowl0pprz58kz3l556eb6w4dd4lz1yiixrmdsxlqf0blpepdnyvnotf7qa21el4yh5b34bn1fsp7gezivybjk85pdzdyeiiwxwqjnl0d1knxjtg0xpy1g5j4jbrv1tuucvp1rz8w9m6j7fc4du7yykdoi == \d\s\h\2\w\s\k\m\0\6\h\c\z\6\o\q\j\6\r\b\t\w\i\p\j\n\c\u\p\8\j\2\h\7\p\k\g\e\2\1\6\b\0\f\l\3\6\i\2\5\s\u\0\8\i\o\9\9\i\w\1\m\c\g\8\m\a\g\y\7\d\r\j\n\o\a\a\j\5\d\5\i\o\u\h\f\v\q\4\f\5\r\w\n\m\q\u\3\l\e\h\o\p\r\v\r\f\4\9\l\6\s\g\1\7\e\1\f\g\x\6\v\h\v\c\6\c\3\b\h\v\5\y\r\9\v\h\k\f\n\9\x\l\v\s\h\t\6\7\7\k\h\a\m\g\o\8\y\8\i\i\0\2\3\f\8\4\v\x\s\a\4\x\0\x\j\2\f\r\v\o\9\6\j\m\h\7\f\7\w\m\x\h\7\5\x\7\t\g\p\b\d\j\j\q\k\6\a\m\f\a\j\j\4\6\n\x\j\7\k\m\f\j\t\6\e\s\x\2\c\n\8\z\2\y\9\f\u\0\t\1\8\z\t\f\a\i\q\n\m\m\z\h\o\5\n\3\6\b\e\j\i\0\c\o\c\e\6\6\u\p\0\c\f\3\4\e\9\o\i\w\u\z\l\k\f\n\2\7\4\y\s\5\5\1\x\e\5\5\2\9\u\4\2\g\c\1\v\z\o\g\a\o\f\l\j\r\3\9\q\m\4\8\r\q\b\4\a\w\y\b\f\0\n\e\e\4\8\y\q\2\v\t\3\2\y\b\o\j\0\y\w\v\l\r\e\l\5\6\c\o\w\l\0\p\p\r\z\5\8\k\z\3\l\5\5\6\e\b\6\w\4\d\d\4\l\z\1\y\i\i\x\r\m\d\s\x\l\q\f\0\b\l\p\e\p\d\n\y\v\n\o\t\f\7\q\a\2\1\e\l\4\y\h\5\b\3\4\b\n\1\f\s\p\7\g\e\z\i\v\y\b\j\k\8\5\p\d\z\d\y\e\i\i\w\x\w\q\j\n\l\0\d\1\k\n\x\j\t\g\0\x\p\y\1\g\5\j\4\j\b\r\v\1\t\u\u\c\v\p\1\r\z\8\w\9\m\6\j\7\f\c\4\d\u\7\y\y\k\d\o\i ]] 00:18:33.693 00:18:33.693 real 0m2.906s 00:18:33.693 user 0m1.466s 00:18:33.693 sys 0m1.147s 00:18:33.693 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.693 07:15:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:18:33.693 ************************************ 00:18:33.693 END TEST dd_flags_misc 00:18:33.693 ************************************ 00:18:33.693 07:15:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:18:33.693 07:15:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:18:33.693 * Second test run, disabling liburing, forcing AIO 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.694 ************************************ 00:18:33.694 START TEST dd_flag_append_forced_aio 00:18:33.694 ************************************ 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=7nehpt1kv6ozsq7iuwsfmmc5z6rpu4by 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=s3dr1eki0wdvdkd72ardojsxk1gd69cc 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 7nehpt1kv6ozsq7iuwsfmmc5z6rpu4by 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s s3dr1eki0wdvdkd72ardojsxk1gd69cc 00:18:33.694 07:15:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:33.694 [2024-11-20 07:15:57.874436] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
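The append case above seeds dd.dump0 and dd.dump1 with one 32-byte random string each (7nehpt... and s3dr1e... in this run), copies dump0 onto dump1 with --oflag=append, and the check that follows asserts the result is dump1's original content with dump0's appended. A sketch of the same assertion, using the file names from the log and the illustrative $SPDK_DD variable:

    dump0=$(< dd.dump0)                     # 32 random text-safe bytes
    dump1=$(< dd.dump1)                     # another 32 bytes
    "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ "$(< dd.dump1)" == "${dump1}${dump0}" ]]   # old dump1, then dump0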
00:18:33.694 [2024-11-20 07:15:57.874495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:18:33.951 [2024-11-20 07:15:58.013990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.951 [2024-11-20 07:15:58.048470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.951 [2024-11-20 07:15:58.078902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.951  [2024-11-20T07:15:58.413Z] Copying: 32/32 [B] (average 31 kBps) 00:18:34.210 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ s3dr1eki0wdvdkd72ardojsxk1gd69cc7nehpt1kv6ozsq7iuwsfmmc5z6rpu4by == \s\3\d\r\1\e\k\i\0\w\d\v\d\k\d\7\2\a\r\d\o\j\s\x\k\1\g\d\6\9\c\c\7\n\e\h\p\t\1\k\v\6\o\z\s\q\7\i\u\w\s\f\m\m\c\5\z\6\r\p\u\4\b\y ]] 00:18:34.210 00:18:34.210 real 0m0.385s 00:18:34.210 user 0m0.184s 00:18:34.210 sys 0m0.082s 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:34.210 ************************************ 00:18:34.210 END TEST dd_flag_append_forced_aio 00:18:34.210 ************************************ 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:34.210 ************************************ 00:18:34.210 START TEST dd_flag_directory_forced_aio 00:18:34.210 ************************************ 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.210 07:15:58 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:34.210 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.210 [2024-11-20 07:15:58.288701] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:34.210 [2024-11-20 07:15:58.288761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:18:34.469 [2024-11-20 07:15:58.428811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.469 [2024-11-20 07:15:58.462777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.469 [2024-11-20 07:15:58.492618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.469 [2024-11-20 07:15:58.514885] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.469 [2024-11-20 07:15:58.514924] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.469 [2024-11-20 07:15:58.514937] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:34.469 [2024-11-20 07:15:58.569878] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:34.469 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:34.469 [2024-11-20 07:15:58.648559] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:34.469 [2024-11-20 07:15:58.648621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59571 ] 00:18:34.727 [2024-11-20 07:15:58.788766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.727 [2024-11-20 07:15:58.823093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.727 [2024-11-20 07:15:58.852892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.727 [2024-11-20 07:15:58.875093] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.727 [2024-11-20 07:15:58.875138] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.727 [2024-11-20 07:15:58.875152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:34.987 [2024-11-20 07:15:58.930878] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:34.987 07:15:58 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.987 00:18:34.987 real 0m0.720s 00:18:34.987 user 0m0.354s 00:18:34.987 sys 0m0.159s 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.987 07:15:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:34.987 ************************************ 00:18:34.987 END TEST dd_flag_directory_forced_aio 00:18:34.987 ************************************ 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:34.987 ************************************ 00:18:34.987 START TEST dd_flag_nofollow_forced_aio 00:18:34.987 ************************************ 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:34.987 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:34.988 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:34.988 [2024-11-20 07:15:59.050910] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:34.988 [2024-11-20 07:15:59.050976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59594 ] 00:18:35.259 [2024-11-20 07:15:59.192504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.259 [2024-11-20 07:15:59.226981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.259 [2024-11-20 07:15:59.257561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.259 [2024-11-20 07:15:59.280545] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:35.259 [2024-11-20 07:15:59.280590] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:35.259 [2024-11-20 07:15:59.280603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:35.259 [2024-11-20 07:15:59.336768] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:35.259 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:35.259 [2024-11-20 07:15:59.413985] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:35.259 [2024-11-20 07:15:59.414052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59604 ] 00:18:35.516 [2024-11-20 07:15:59.546110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.516 [2024-11-20 07:15:59.580496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.516 [2024-11-20 07:15:59.610350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.516 [2024-11-20 07:15:59.632343] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:35.516 [2024-11-20 07:15:59.632377] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:35.516 [2024-11-20 07:15:59.632388] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:35.516 [2024-11-20 07:15:59.687195] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:35.774 07:15:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:35.774 [2024-11-20 07:15:59.774656] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:35.774 [2024-11-20 07:15:59.774719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59611 ] 00:18:35.774 [2024-11-20 07:15:59.914174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.774 [2024-11-20 07:15:59.948550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.031 [2024-11-20 07:15:59.978394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.031  [2024-11-20T07:16:00.234Z] Copying: 512/512 [B] (average 500 kBps) 00:18:36.031 00:18:36.031 ************************************ 00:18:36.031 END TEST dd_flag_nofollow_forced_aio 00:18:36.031 ************************************ 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ f2hvwk1dkqf82n1utrjysfq7szdnlyv97ma14tmvtcmnoewt6e9yqjnxabny3ma1oore602qzdcuvoaizuf6mn8ohc8vzxhjk5t27xmhd1q3i46ijdq4vspihfp42mus2is5uojbi96lo4j8sq8zl7btxpqql5u6lgeu0zc9h1mjsm7gfjurd0nwl79e1qwp78qr0b008y3o0s7av04h20xj3jmvgrntwpgi3272wegk5yb08x4krtvcj91apypakhnpj3zatr8iizoqzb9dj4794bgplndxpdcrutjn5pp1cvl673wy2lo0vn4vdtjj8aczk9zgg885x4r99id2hucdlut9vveha37vytiylzzoggag3chpmjtmhncoi2132r4mp91mguoq09smh8w8x8hratjoepuk9t7t3ej9fy1hc3i5al27ej6mz4od959pgn0eco2q8sk9cj4qrieeotn4nmxo2e9dshtr013xo55lugevxkf2064t7zmmrdjk == \f\2\h\v\w\k\1\d\k\q\f\8\2\n\1\u\t\r\j\y\s\f\q\7\s\z\d\n\l\y\v\9\7\m\a\1\4\t\m\v\t\c\m\n\o\e\w\t\6\e\9\y\q\j\n\x\a\b\n\y\3\m\a\1\o\o\r\e\6\0\2\q\z\d\c\u\v\o\a\i\z\u\f\6\m\n\8\o\h\c\8\v\z\x\h\j\k\5\t\2\7\x\m\h\d\1\q\3\i\4\6\i\j\d\q\4\v\s\p\i\h\f\p\4\2\m\u\s\2\i\s\5\u\o\j\b\i\9\6\l\o\4\j\8\s\q\8\z\l\7\b\t\x\p\q\q\l\5\u\6\l\g\e\u\0\z\c\9\h\1\m\j\s\m\7\g\f\j\u\r\d\0\n\w\l\7\9\e\1\q\w\p\7\8\q\r\0\b\0\0\8\y\3\o\0\s\7\a\v\0\4\h\2\0\x\j\3\j\m\v\g\r\n\t\w\p\g\i\3\2\7\2\w\e\g\k\5\y\b\0\8\x\4\k\r\t\v\c\j\9\1\a\p\y\p\a\k\h\n\p\j\3\z\a\t\r\8\i\i\z\o\q\z\b\9\d\j\4\7\9\4\b\g\p\l\n\d\x\p\d\c\r\u\t\j\n\5\p\p\1\c\v\l\6\7\3\w\y\2\l\o\0\v\n\4\v\d\t\j\j\8\a\c\z\k\9\z\g\g\8\8\5\x\4\r\9\9\i\d\2\h\u\c\d\l\u\t\9\v\v\e\h\a\3\7\v\y\t\i\y\l\z\z\o\g\g\a\g\3\c\h\p\m\j\t\m\h\n\c\o\i\2\1\3\2\r\4\m\p\9\1\m\g\u\o\q\0\9\s\m\h\8\w\8\x\8\h\r\a\t\j\o\e\p\u\k\9\t\7\t\3\e\j\9\f\y\1\h\c\3\i\5\a\l\2\7\e\j\6\m\z\4\o\d\9\5\9\p\g\n\0\e\c\o\2\q\8\s\k\9\c\j\4\q\r\i\e\e\o\t\n\4\n\m\x\o\2\e\9\d\s\h\t\r\0\1\3\x\o\5\5\l\u\g\e\v\x\k\f\2\0\6\4\t\7\z\m\m\r\d\j\k ]] 00:18:36.031 00:18:36.031 real 0m1.106s 00:18:36.031 user 0m0.539s 00:18:36.031 sys 0m0.242s 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:36.031 ************************************ 00:18:36.031 START TEST dd_flag_noatime_forced_aio 00:18:36.031 ************************************ 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732086960 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732086960 00:18:36.031 07:16:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:37.402 [2024-11-20 07:16:01.218067] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
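Everything from the "Second test run" banner onward repeats the posix suite after DD_APP+=("--aio"), so every spdk_dd call carries --aio and exercises the POSIX AIO path instead of liburing, exactly as the banner states. A sketch of the argument-array pattern (how the suite actually builds DD_APP is not shown in this excerpt):

    DD_APP=("$SPDK_DD")     # illustrative; base command array
    DD_APP+=("--aio")       # from here on, every call forces POSIX AIO
    "${DD_APP[@]}" --if=dd.dump0 --iflag=noatime --of=dd.dump1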
00:18:37.402 [2024-11-20 07:16:01.218133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:18:37.402 [2024-11-20 07:16:01.356111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.402 [2024-11-20 07:16:01.390831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.402 [2024-11-20 07:16:01.420960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.402  [2024-11-20T07:16:01.605Z] Copying: 512/512 [B] (average 500 kBps) 00:18:37.402 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732086960 )) 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732086960 )) 00:18:37.402 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:37.660 [2024-11-20 07:16:01.604482] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:37.660 [2024-11-20 07:16:01.604542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59658 ] 00:18:37.660 [2024-11-20 07:16:01.744871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.660 [2024-11-20 07:16:01.779030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.660 [2024-11-20 07:16:01.808682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.660  [2024-11-20T07:16:02.121Z] Copying: 512/512 [B] (average 500 kBps) 00:18:37.918 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:37.918 ************************************ 00:18:37.918 END TEST dd_flag_noatime_forced_aio 00:18:37.918 ************************************ 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732086961 )) 00:18:37.918 00:18:37.918 real 0m1.789s 00:18:37.918 user 0m0.369s 00:18:37.918 sys 0m0.182s 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.918 ************************************ 00:18:37.918 START TEST dd_flags_misc_forced_aio 00:18:37.918 ************************************ 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:37.918 07:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:37.919 [2024-11-20 07:16:02.032642] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:37.919 [2024-11-20 07:16:02.032810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:18:38.178 [2024-11-20 07:16:02.173511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.178 [2024-11-20 07:16:02.208195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.178 [2024-11-20 07:16:02.237870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.178  [2024-11-20T07:16:02.639Z] Copying: 512/512 [B] (average 500 kBps) 00:18:38.436 00:18:38.436 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hto7kq8k1x2hzdvclkk8azgyw17kejkh91gqyggj3ooqr7ihwohx0z75zwkogu7zj0d53cpbwyl9u3sgwjuzjwh3gsd3an6v36yz5r2ojvqvzur5hshc9u5116sv6tkkxa9k7c11g7oe0mgwzxkhke161vdv8i2lb4hkpvkp849hjipxfk9o7hchuro8hyudncm7ywp3ty4bhuanm7d8x63ssyayfb2v3bm0dgkhaccbjb11bxj2nsr3w1r0p7lwclht7894lnup4sq9cixhdyq3ynhnhmljt587kcaivsw8fzhsgcor7hia0np3j5q1l1giqc76fcn486m38yerf30h0ea5us66kfs15j1fpx193jhzbui1w9klnmnk36ju2f9reg8h1mvm5qshfdiugsm9byshz1rtq3noq7itmdrkawg6ve71kelzpbjk7ymry7csjljnxreh4bn4ebsrhn0ofdzes0zo17lx3ax9txuz1pzd1ltb3j5rxkkc6yfo == 
\h\t\o\7\k\q\8\k\1\x\2\h\z\d\v\c\l\k\k\8\a\z\g\y\w\1\7\k\e\j\k\h\9\1\g\q\y\g\g\j\3\o\o\q\r\7\i\h\w\o\h\x\0\z\7\5\z\w\k\o\g\u\7\z\j\0\d\5\3\c\p\b\w\y\l\9\u\3\s\g\w\j\u\z\j\w\h\3\g\s\d\3\a\n\6\v\3\6\y\z\5\r\2\o\j\v\q\v\z\u\r\5\h\s\h\c\9\u\5\1\1\6\s\v\6\t\k\k\x\a\9\k\7\c\1\1\g\7\o\e\0\m\g\w\z\x\k\h\k\e\1\6\1\v\d\v\8\i\2\l\b\4\h\k\p\v\k\p\8\4\9\h\j\i\p\x\f\k\9\o\7\h\c\h\u\r\o\8\h\y\u\d\n\c\m\7\y\w\p\3\t\y\4\b\h\u\a\n\m\7\d\8\x\6\3\s\s\y\a\y\f\b\2\v\3\b\m\0\d\g\k\h\a\c\c\b\j\b\1\1\b\x\j\2\n\s\r\3\w\1\r\0\p\7\l\w\c\l\h\t\7\8\9\4\l\n\u\p\4\s\q\9\c\i\x\h\d\y\q\3\y\n\h\n\h\m\l\j\t\5\8\7\k\c\a\i\v\s\w\8\f\z\h\s\g\c\o\r\7\h\i\a\0\n\p\3\j\5\q\1\l\1\g\i\q\c\7\6\f\c\n\4\8\6\m\3\8\y\e\r\f\3\0\h\0\e\a\5\u\s\6\6\k\f\s\1\5\j\1\f\p\x\1\9\3\j\h\z\b\u\i\1\w\9\k\l\n\m\n\k\3\6\j\u\2\f\9\r\e\g\8\h\1\m\v\m\5\q\s\h\f\d\i\u\g\s\m\9\b\y\s\h\z\1\r\t\q\3\n\o\q\7\i\t\m\d\r\k\a\w\g\6\v\e\7\1\k\e\l\z\p\b\j\k\7\y\m\r\y\7\c\s\j\l\j\n\x\r\e\h\4\b\n\4\e\b\s\r\h\n\0\o\f\d\z\e\s\0\z\o\1\7\l\x\3\a\x\9\t\x\u\z\1\p\z\d\1\l\t\b\3\j\5\r\x\k\k\c\6\y\f\o ]] 00:18:38.436 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:38.436 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:38.436 [2024-11-20 07:16:02.429259] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:38.436 [2024-11-20 07:16:02.429321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59692 ] 00:18:38.436 [2024-11-20 07:16:02.562583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.436 [2024-11-20 07:16:02.597293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.436 [2024-11-20 07:16:02.629285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.694  [2024-11-20T07:16:02.897Z] Copying: 512/512 [B] (average 500 kBps) 00:18:38.694 00:18:38.694 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hto7kq8k1x2hzdvclkk8azgyw17kejkh91gqyggj3ooqr7ihwohx0z75zwkogu7zj0d53cpbwyl9u3sgwjuzjwh3gsd3an6v36yz5r2ojvqvzur5hshc9u5116sv6tkkxa9k7c11g7oe0mgwzxkhke161vdv8i2lb4hkpvkp849hjipxfk9o7hchuro8hyudncm7ywp3ty4bhuanm7d8x63ssyayfb2v3bm0dgkhaccbjb11bxj2nsr3w1r0p7lwclht7894lnup4sq9cixhdyq3ynhnhmljt587kcaivsw8fzhsgcor7hia0np3j5q1l1giqc76fcn486m38yerf30h0ea5us66kfs15j1fpx193jhzbui1w9klnmnk36ju2f9reg8h1mvm5qshfdiugsm9byshz1rtq3noq7itmdrkawg6ve71kelzpbjk7ymry7csjljnxreh4bn4ebsrhn0ofdzes0zo17lx3ax9txuz1pzd1ltb3j5rxkkc6yfo == 
\h\t\o\7\k\q\8\k\1\x\2\h\z\d\v\c\l\k\k\8\a\z\g\y\w\1\7\k\e\j\k\h\9\1\g\q\y\g\g\j\3\o\o\q\r\7\i\h\w\o\h\x\0\z\7\5\z\w\k\o\g\u\7\z\j\0\d\5\3\c\p\b\w\y\l\9\u\3\s\g\w\j\u\z\j\w\h\3\g\s\d\3\a\n\6\v\3\6\y\z\5\r\2\o\j\v\q\v\z\u\r\5\h\s\h\c\9\u\5\1\1\6\s\v\6\t\k\k\x\a\9\k\7\c\1\1\g\7\o\e\0\m\g\w\z\x\k\h\k\e\1\6\1\v\d\v\8\i\2\l\b\4\h\k\p\v\k\p\8\4\9\h\j\i\p\x\f\k\9\o\7\h\c\h\u\r\o\8\h\y\u\d\n\c\m\7\y\w\p\3\t\y\4\b\h\u\a\n\m\7\d\8\x\6\3\s\s\y\a\y\f\b\2\v\3\b\m\0\d\g\k\h\a\c\c\b\j\b\1\1\b\x\j\2\n\s\r\3\w\1\r\0\p\7\l\w\c\l\h\t\7\8\9\4\l\n\u\p\4\s\q\9\c\i\x\h\d\y\q\3\y\n\h\n\h\m\l\j\t\5\8\7\k\c\a\i\v\s\w\8\f\z\h\s\g\c\o\r\7\h\i\a\0\n\p\3\j\5\q\1\l\1\g\i\q\c\7\6\f\c\n\4\8\6\m\3\8\y\e\r\f\3\0\h\0\e\a\5\u\s\6\6\k\f\s\1\5\j\1\f\p\x\1\9\3\j\h\z\b\u\i\1\w\9\k\l\n\m\n\k\3\6\j\u\2\f\9\r\e\g\8\h\1\m\v\m\5\q\s\h\f\d\i\u\g\s\m\9\b\y\s\h\z\1\r\t\q\3\n\o\q\7\i\t\m\d\r\k\a\w\g\6\v\e\7\1\k\e\l\z\p\b\j\k\7\y\m\r\y\7\c\s\j\l\j\n\x\r\e\h\4\b\n\4\e\b\s\r\h\n\0\o\f\d\z\e\s\0\z\o\1\7\l\x\3\a\x\9\t\x\u\z\1\p\z\d\1\l\t\b\3\j\5\r\x\k\k\c\6\y\f\o ]] 00:18:38.694 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:38.694 07:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:38.694 [2024-11-20 07:16:02.802363] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:38.694 [2024-11-20 07:16:02.802425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:18:38.952 [2024-11-20 07:16:02.941701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.952 [2024-11-20 07:16:02.976309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.952 [2024-11-20 07:16:03.005947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.952  [2024-11-20T07:16:03.155Z] Copying: 512/512 [B] (average 166 kBps) 00:18:38.952 00:18:38.953 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hto7kq8k1x2hzdvclkk8azgyw17kejkh91gqyggj3ooqr7ihwohx0z75zwkogu7zj0d53cpbwyl9u3sgwjuzjwh3gsd3an6v36yz5r2ojvqvzur5hshc9u5116sv6tkkxa9k7c11g7oe0mgwzxkhke161vdv8i2lb4hkpvkp849hjipxfk9o7hchuro8hyudncm7ywp3ty4bhuanm7d8x63ssyayfb2v3bm0dgkhaccbjb11bxj2nsr3w1r0p7lwclht7894lnup4sq9cixhdyq3ynhnhmljt587kcaivsw8fzhsgcor7hia0np3j5q1l1giqc76fcn486m38yerf30h0ea5us66kfs15j1fpx193jhzbui1w9klnmnk36ju2f9reg8h1mvm5qshfdiugsm9byshz1rtq3noq7itmdrkawg6ve71kelzpbjk7ymry7csjljnxreh4bn4ebsrhn0ofdzes0zo17lx3ax9txuz1pzd1ltb3j5rxkkc6yfo == 
\h\t\o\7\k\q\8\k\1\x\2\h\z\d\v\c\l\k\k\8\a\z\g\y\w\1\7\k\e\j\k\h\9\1\g\q\y\g\g\j\3\o\o\q\r\7\i\h\w\o\h\x\0\z\7\5\z\w\k\o\g\u\7\z\j\0\d\5\3\c\p\b\w\y\l\9\u\3\s\g\w\j\u\z\j\w\h\3\g\s\d\3\a\n\6\v\3\6\y\z\5\r\2\o\j\v\q\v\z\u\r\5\h\s\h\c\9\u\5\1\1\6\s\v\6\t\k\k\x\a\9\k\7\c\1\1\g\7\o\e\0\m\g\w\z\x\k\h\k\e\1\6\1\v\d\v\8\i\2\l\b\4\h\k\p\v\k\p\8\4\9\h\j\i\p\x\f\k\9\o\7\h\c\h\u\r\o\8\h\y\u\d\n\c\m\7\y\w\p\3\t\y\4\b\h\u\a\n\m\7\d\8\x\6\3\s\s\y\a\y\f\b\2\v\3\b\m\0\d\g\k\h\a\c\c\b\j\b\1\1\b\x\j\2\n\s\r\3\w\1\r\0\p\7\l\w\c\l\h\t\7\8\9\4\l\n\u\p\4\s\q\9\c\i\x\h\d\y\q\3\y\n\h\n\h\m\l\j\t\5\8\7\k\c\a\i\v\s\w\8\f\z\h\s\g\c\o\r\7\h\i\a\0\n\p\3\j\5\q\1\l\1\g\i\q\c\7\6\f\c\n\4\8\6\m\3\8\y\e\r\f\3\0\h\0\e\a\5\u\s\6\6\k\f\s\1\5\j\1\f\p\x\1\9\3\j\h\z\b\u\i\1\w\9\k\l\n\m\n\k\3\6\j\u\2\f\9\r\e\g\8\h\1\m\v\m\5\q\s\h\f\d\i\u\g\s\m\9\b\y\s\h\z\1\r\t\q\3\n\o\q\7\i\t\m\d\r\k\a\w\g\6\v\e\7\1\k\e\l\z\p\b\j\k\7\y\m\r\y\7\c\s\j\l\j\n\x\r\e\h\4\b\n\4\e\b\s\r\h\n\0\o\f\d\z\e\s\0\z\o\1\7\l\x\3\a\x\9\t\x\u\z\1\p\z\d\1\l\t\b\3\j\5\r\x\k\k\c\6\y\f\o ]] 00:18:38.953 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:38.953 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:39.249 [2024-11-20 07:16:03.186120] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:39.249 [2024-11-20 07:16:03.186186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59701 ] 00:18:39.249 [2024-11-20 07:16:03.326044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.249 [2024-11-20 07:16:03.361676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.249 [2024-11-20 07:16:03.392088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.249  [2024-11-20T07:16:03.711Z] Copying: 512/512 [B] (average 166 kBps) 00:18:39.508 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hto7kq8k1x2hzdvclkk8azgyw17kejkh91gqyggj3ooqr7ihwohx0z75zwkogu7zj0d53cpbwyl9u3sgwjuzjwh3gsd3an6v36yz5r2ojvqvzur5hshc9u5116sv6tkkxa9k7c11g7oe0mgwzxkhke161vdv8i2lb4hkpvkp849hjipxfk9o7hchuro8hyudncm7ywp3ty4bhuanm7d8x63ssyayfb2v3bm0dgkhaccbjb11bxj2nsr3w1r0p7lwclht7894lnup4sq9cixhdyq3ynhnhmljt587kcaivsw8fzhsgcor7hia0np3j5q1l1giqc76fcn486m38yerf30h0ea5us66kfs15j1fpx193jhzbui1w9klnmnk36ju2f9reg8h1mvm5qshfdiugsm9byshz1rtq3noq7itmdrkawg6ve71kelzpbjk7ymry7csjljnxreh4bn4ebsrhn0ofdzes0zo17lx3ax9txuz1pzd1ltb3j5rxkkc6yfo == 
\h\t\o\7\k\q\8\k\1\x\2\h\z\d\v\c\l\k\k\8\a\z\g\y\w\1\7\k\e\j\k\h\9\1\g\q\y\g\g\j\3\o\o\q\r\7\i\h\w\o\h\x\0\z\7\5\z\w\k\o\g\u\7\z\j\0\d\5\3\c\p\b\w\y\l\9\u\3\s\g\w\j\u\z\j\w\h\3\g\s\d\3\a\n\6\v\3\6\y\z\5\r\2\o\j\v\q\v\z\u\r\5\h\s\h\c\9\u\5\1\1\6\s\v\6\t\k\k\x\a\9\k\7\c\1\1\g\7\o\e\0\m\g\w\z\x\k\h\k\e\1\6\1\v\d\v\8\i\2\l\b\4\h\k\p\v\k\p\8\4\9\h\j\i\p\x\f\k\9\o\7\h\c\h\u\r\o\8\h\y\u\d\n\c\m\7\y\w\p\3\t\y\4\b\h\u\a\n\m\7\d\8\x\6\3\s\s\y\a\y\f\b\2\v\3\b\m\0\d\g\k\h\a\c\c\b\j\b\1\1\b\x\j\2\n\s\r\3\w\1\r\0\p\7\l\w\c\l\h\t\7\8\9\4\l\n\u\p\4\s\q\9\c\i\x\h\d\y\q\3\y\n\h\n\h\m\l\j\t\5\8\7\k\c\a\i\v\s\w\8\f\z\h\s\g\c\o\r\7\h\i\a\0\n\p\3\j\5\q\1\l\1\g\i\q\c\7\6\f\c\n\4\8\6\m\3\8\y\e\r\f\3\0\h\0\e\a\5\u\s\6\6\k\f\s\1\5\j\1\f\p\x\1\9\3\j\h\z\b\u\i\1\w\9\k\l\n\m\n\k\3\6\j\u\2\f\9\r\e\g\8\h\1\m\v\m\5\q\s\h\f\d\i\u\g\s\m\9\b\y\s\h\z\1\r\t\q\3\n\o\q\7\i\t\m\d\r\k\a\w\g\6\v\e\7\1\k\e\l\z\p\b\j\k\7\y\m\r\y\7\c\s\j\l\j\n\x\r\e\h\4\b\n\4\e\b\s\r\h\n\0\o\f\d\z\e\s\0\z\o\1\7\l\x\3\a\x\9\t\x\u\z\1\p\z\d\1\l\t\b\3\j\5\r\x\k\k\c\6\y\f\o ]] 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:39.508 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:39.508 [2024-11-20 07:16:03.584580] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
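(The matrix being walked in this stretch, reconstructed for readability: flags_ro=(direct nonblock) crossed with flags_rw=(direct nonblock sync dsync), exactly as the xtrace at dd/posix.sh@81-89 shows. A minimal sketch of that loop follows; it assumes gen_bytes writes its output into dd.dump0 — redirections are not visible in xtrace — and that the long [[ ... == \h\t\o... ]] lines are a literal byte-for-byte comparison of the two dump files, neither of which is verbatim posix.sh source:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512 > "$DD_DIR/dd.dump0"   # assumed: fresh 512-byte payload per read-flag pass
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --aio --if="$DD_DIR/dd.dump0" --iflag="$flag_ro" \
               --of="$DD_DIR/dd.dump1" --oflag="$flag_rw"
    # the payload must survive every flag combination unchanged
    [[ "$(<"$DD_DIR/dd.dump1")" == "$(<"$DD_DIR/dd.dump0")" ]]
  done
done
)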
00:18:39.508 [2024-11-20 07:16:03.584644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59709 ] 00:18:39.766 [2024-11-20 07:16:03.722411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.766 [2024-11-20 07:16:03.757169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.766 [2024-11-20 07:16:03.786917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.766  [2024-11-20T07:16:03.969Z] Copying: 512/512 [B] (average 500 kBps) 00:18:39.766 00:18:39.766 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r8krijabt4aiuckq1cgz18gd0dx3b53tnsmyzb5vqw74ndtkqina744uijqgbfaa2daou51frgo55lteai3gwq2uah1u8x62j0q6g12aagtl5oxsd79as1fabffr5adr8fn3niw4bd95dt0s5z0phi8m5v5ptzkqp03u0276hsanjnrc154shwqt250eki5l04ba6lxwlh0wl1n1eyqo509gxloqeg4ixoksyjuvjri09nee2njh1ezgo2eywzcufuvf7b3sjvi9122fg7vwtbgcs13hbq1nwjse3r2xd71bdplt8zq5odywl6lckv8cv16v047jwam008x2yggdzeweehbo340f9l8ef3scskbx4jloi8onoon85py4p8p65f053eameahmoft8va65eeqfwzlxlezaamp2dgg0rbz5rcqy4czzbeio7mclwidnzah0ktwi9k8hye8te09kqp9x23q7jtso9jo3mco1c14v8m469dzv16gxv0u30crj == \r\8\k\r\i\j\a\b\t\4\a\i\u\c\k\q\1\c\g\z\1\8\g\d\0\d\x\3\b\5\3\t\n\s\m\y\z\b\5\v\q\w\7\4\n\d\t\k\q\i\n\a\7\4\4\u\i\j\q\g\b\f\a\a\2\d\a\o\u\5\1\f\r\g\o\5\5\l\t\e\a\i\3\g\w\q\2\u\a\h\1\u\8\x\6\2\j\0\q\6\g\1\2\a\a\g\t\l\5\o\x\s\d\7\9\a\s\1\f\a\b\f\f\r\5\a\d\r\8\f\n\3\n\i\w\4\b\d\9\5\d\t\0\s\5\z\0\p\h\i\8\m\5\v\5\p\t\z\k\q\p\0\3\u\0\2\7\6\h\s\a\n\j\n\r\c\1\5\4\s\h\w\q\t\2\5\0\e\k\i\5\l\0\4\b\a\6\l\x\w\l\h\0\w\l\1\n\1\e\y\q\o\5\0\9\g\x\l\o\q\e\g\4\i\x\o\k\s\y\j\u\v\j\r\i\0\9\n\e\e\2\n\j\h\1\e\z\g\o\2\e\y\w\z\c\u\f\u\v\f\7\b\3\s\j\v\i\9\1\2\2\f\g\7\v\w\t\b\g\c\s\1\3\h\b\q\1\n\w\j\s\e\3\r\2\x\d\7\1\b\d\p\l\t\8\z\q\5\o\d\y\w\l\6\l\c\k\v\8\c\v\1\6\v\0\4\7\j\w\a\m\0\0\8\x\2\y\g\g\d\z\e\w\e\e\h\b\o\3\4\0\f\9\l\8\e\f\3\s\c\s\k\b\x\4\j\l\o\i\8\o\n\o\o\n\8\5\p\y\4\p\8\p\6\5\f\0\5\3\e\a\m\e\a\h\m\o\f\t\8\v\a\6\5\e\e\q\f\w\z\l\x\l\e\z\a\a\m\p\2\d\g\g\0\r\b\z\5\r\c\q\y\4\c\z\z\b\e\i\o\7\m\c\l\w\i\d\n\z\a\h\0\k\t\w\i\9\k\8\h\y\e\8\t\e\0\9\k\q\p\9\x\2\3\q\7\j\t\s\o\9\j\o\3\m\c\o\1\c\1\4\v\8\m\4\6\9\d\z\v\1\6\g\x\v\0\u\3\0\c\r\j ]] 00:18:39.766 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:39.766 07:16:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:39.766 [2024-11-20 07:16:03.962944] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:39.766 [2024-11-20 07:16:03.963009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59716 ] 00:18:40.026 [2024-11-20 07:16:04.112770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.026 [2024-11-20 07:16:04.147139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.026 [2024-11-20 07:16:04.177452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.026  [2024-11-20T07:16:04.487Z] Copying: 512/512 [B] (average 500 kBps) 00:18:40.284 00:18:40.284 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r8krijabt4aiuckq1cgz18gd0dx3b53tnsmyzb5vqw74ndtkqina744uijqgbfaa2daou51frgo55lteai3gwq2uah1u8x62j0q6g12aagtl5oxsd79as1fabffr5adr8fn3niw4bd95dt0s5z0phi8m5v5ptzkqp03u0276hsanjnrc154shwqt250eki5l04ba6lxwlh0wl1n1eyqo509gxloqeg4ixoksyjuvjri09nee2njh1ezgo2eywzcufuvf7b3sjvi9122fg7vwtbgcs13hbq1nwjse3r2xd71bdplt8zq5odywl6lckv8cv16v047jwam008x2yggdzeweehbo340f9l8ef3scskbx4jloi8onoon85py4p8p65f053eameahmoft8va65eeqfwzlxlezaamp2dgg0rbz5rcqy4czzbeio7mclwidnzah0ktwi9k8hye8te09kqp9x23q7jtso9jo3mco1c14v8m469dzv16gxv0u30crj == \r\8\k\r\i\j\a\b\t\4\a\i\u\c\k\q\1\c\g\z\1\8\g\d\0\d\x\3\b\5\3\t\n\s\m\y\z\b\5\v\q\w\7\4\n\d\t\k\q\i\n\a\7\4\4\u\i\j\q\g\b\f\a\a\2\d\a\o\u\5\1\f\r\g\o\5\5\l\t\e\a\i\3\g\w\q\2\u\a\h\1\u\8\x\6\2\j\0\q\6\g\1\2\a\a\g\t\l\5\o\x\s\d\7\9\a\s\1\f\a\b\f\f\r\5\a\d\r\8\f\n\3\n\i\w\4\b\d\9\5\d\t\0\s\5\z\0\p\h\i\8\m\5\v\5\p\t\z\k\q\p\0\3\u\0\2\7\6\h\s\a\n\j\n\r\c\1\5\4\s\h\w\q\t\2\5\0\e\k\i\5\l\0\4\b\a\6\l\x\w\l\h\0\w\l\1\n\1\e\y\q\o\5\0\9\g\x\l\o\q\e\g\4\i\x\o\k\s\y\j\u\v\j\r\i\0\9\n\e\e\2\n\j\h\1\e\z\g\o\2\e\y\w\z\c\u\f\u\v\f\7\b\3\s\j\v\i\9\1\2\2\f\g\7\v\w\t\b\g\c\s\1\3\h\b\q\1\n\w\j\s\e\3\r\2\x\d\7\1\b\d\p\l\t\8\z\q\5\o\d\y\w\l\6\l\c\k\v\8\c\v\1\6\v\0\4\7\j\w\a\m\0\0\8\x\2\y\g\g\d\z\e\w\e\e\h\b\o\3\4\0\f\9\l\8\e\f\3\s\c\s\k\b\x\4\j\l\o\i\8\o\n\o\o\n\8\5\p\y\4\p\8\p\6\5\f\0\5\3\e\a\m\e\a\h\m\o\f\t\8\v\a\6\5\e\e\q\f\w\z\l\x\l\e\z\a\a\m\p\2\d\g\g\0\r\b\z\5\r\c\q\y\4\c\z\z\b\e\i\o\7\m\c\l\w\i\d\n\z\a\h\0\k\t\w\i\9\k\8\h\y\e\8\t\e\0\9\k\q\p\9\x\2\3\q\7\j\t\s\o\9\j\o\3\m\c\o\1\c\1\4\v\8\m\4\6\9\d\z\v\1\6\g\x\v\0\u\3\0\c\r\j ]] 00:18:40.284 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:40.284 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:40.284 [2024-11-20 07:16:04.352598] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:40.284 [2024-11-20 07:16:04.352655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:18:40.542 [2024-11-20 07:16:04.491930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.542 [2024-11-20 07:16:04.527765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.542 [2024-11-20 07:16:04.559575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.542  [2024-11-20T07:16:04.745Z] Copying: 512/512 [B] (average 250 kBps) 00:18:40.542 00:18:40.542 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r8krijabt4aiuckq1cgz18gd0dx3b53tnsmyzb5vqw74ndtkqina744uijqgbfaa2daou51frgo55lteai3gwq2uah1u8x62j0q6g12aagtl5oxsd79as1fabffr5adr8fn3niw4bd95dt0s5z0phi8m5v5ptzkqp03u0276hsanjnrc154shwqt250eki5l04ba6lxwlh0wl1n1eyqo509gxloqeg4ixoksyjuvjri09nee2njh1ezgo2eywzcufuvf7b3sjvi9122fg7vwtbgcs13hbq1nwjse3r2xd71bdplt8zq5odywl6lckv8cv16v047jwam008x2yggdzeweehbo340f9l8ef3scskbx4jloi8onoon85py4p8p65f053eameahmoft8va65eeqfwzlxlezaamp2dgg0rbz5rcqy4czzbeio7mclwidnzah0ktwi9k8hye8te09kqp9x23q7jtso9jo3mco1c14v8m469dzv16gxv0u30crj == \r\8\k\r\i\j\a\b\t\4\a\i\u\c\k\q\1\c\g\z\1\8\g\d\0\d\x\3\b\5\3\t\n\s\m\y\z\b\5\v\q\w\7\4\n\d\t\k\q\i\n\a\7\4\4\u\i\j\q\g\b\f\a\a\2\d\a\o\u\5\1\f\r\g\o\5\5\l\t\e\a\i\3\g\w\q\2\u\a\h\1\u\8\x\6\2\j\0\q\6\g\1\2\a\a\g\t\l\5\o\x\s\d\7\9\a\s\1\f\a\b\f\f\r\5\a\d\r\8\f\n\3\n\i\w\4\b\d\9\5\d\t\0\s\5\z\0\p\h\i\8\m\5\v\5\p\t\z\k\q\p\0\3\u\0\2\7\6\h\s\a\n\j\n\r\c\1\5\4\s\h\w\q\t\2\5\0\e\k\i\5\l\0\4\b\a\6\l\x\w\l\h\0\w\l\1\n\1\e\y\q\o\5\0\9\g\x\l\o\q\e\g\4\i\x\o\k\s\y\j\u\v\j\r\i\0\9\n\e\e\2\n\j\h\1\e\z\g\o\2\e\y\w\z\c\u\f\u\v\f\7\b\3\s\j\v\i\9\1\2\2\f\g\7\v\w\t\b\g\c\s\1\3\h\b\q\1\n\w\j\s\e\3\r\2\x\d\7\1\b\d\p\l\t\8\z\q\5\o\d\y\w\l\6\l\c\k\v\8\c\v\1\6\v\0\4\7\j\w\a\m\0\0\8\x\2\y\g\g\d\z\e\w\e\e\h\b\o\3\4\0\f\9\l\8\e\f\3\s\c\s\k\b\x\4\j\l\o\i\8\o\n\o\o\n\8\5\p\y\4\p\8\p\6\5\f\0\5\3\e\a\m\e\a\h\m\o\f\t\8\v\a\6\5\e\e\q\f\w\z\l\x\l\e\z\a\a\m\p\2\d\g\g\0\r\b\z\5\r\c\q\y\4\c\z\z\b\e\i\o\7\m\c\l\w\i\d\n\z\a\h\0\k\t\w\i\9\k\8\h\y\e\8\t\e\0\9\k\q\p\9\x\2\3\q\7\j\t\s\o\9\j\o\3\m\c\o\1\c\1\4\v\8\m\4\6\9\d\z\v\1\6\g\x\v\0\u\3\0\c\r\j ]] 00:18:40.542 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:40.542 07:16:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:40.542 [2024-11-20 07:16:04.739444] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
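(Each --iflag/--oflag name above presumably maps onto the matching O_* open(2) flag — direct to O_DIRECT, nonblock to O_NONBLOCK, sync to O_SYNC, dsync to O_DSYNC — which would also explain why the sync/dsync passes average 166-250 kBps against 500 kBps for the others. That mapping is an assumption, not something this log states; it can be spot-checked with strace, with SPDK_DD and DD_DIR as in the earlier sketch:

strace -f -e trace=openat -o /tmp/spdk_dd.trace \
  "$SPDK_DD" --aio --if="$DD_DIR/dd.dump0" --iflag=direct \
             --of="$DD_DIR/dd.dump1" --oflag=dsync
grep -E 'dd\.dump[01]' /tmp/spdk_dd.trace   # expect O_DIRECT on dump0 and O_DSYNC on dump1
)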
00:18:40.801 [2024-11-20 07:16:04.739921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:18:40.801 [2024-11-20 07:16:04.880473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.801 [2024-11-20 07:16:04.915979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.801 [2024-11-20 07:16:04.945986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.801  [2024-11-20T07:16:05.263Z] Copying: 512/512 [B] (average 500 kBps) 00:18:41.060 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r8krijabt4aiuckq1cgz18gd0dx3b53tnsmyzb5vqw74ndtkqina744uijqgbfaa2daou51frgo55lteai3gwq2uah1u8x62j0q6g12aagtl5oxsd79as1fabffr5adr8fn3niw4bd95dt0s5z0phi8m5v5ptzkqp03u0276hsanjnrc154shwqt250eki5l04ba6lxwlh0wl1n1eyqo509gxloqeg4ixoksyjuvjri09nee2njh1ezgo2eywzcufuvf7b3sjvi9122fg7vwtbgcs13hbq1nwjse3r2xd71bdplt8zq5odywl6lckv8cv16v047jwam008x2yggdzeweehbo340f9l8ef3scskbx4jloi8onoon85py4p8p65f053eameahmoft8va65eeqfwzlxlezaamp2dgg0rbz5rcqy4czzbeio7mclwidnzah0ktwi9k8hye8te09kqp9x23q7jtso9jo3mco1c14v8m469dzv16gxv0u30crj == \r\8\k\r\i\j\a\b\t\4\a\i\u\c\k\q\1\c\g\z\1\8\g\d\0\d\x\3\b\5\3\t\n\s\m\y\z\b\5\v\q\w\7\4\n\d\t\k\q\i\n\a\7\4\4\u\i\j\q\g\b\f\a\a\2\d\a\o\u\5\1\f\r\g\o\5\5\l\t\e\a\i\3\g\w\q\2\u\a\h\1\u\8\x\6\2\j\0\q\6\g\1\2\a\a\g\t\l\5\o\x\s\d\7\9\a\s\1\f\a\b\f\f\r\5\a\d\r\8\f\n\3\n\i\w\4\b\d\9\5\d\t\0\s\5\z\0\p\h\i\8\m\5\v\5\p\t\z\k\q\p\0\3\u\0\2\7\6\h\s\a\n\j\n\r\c\1\5\4\s\h\w\q\t\2\5\0\e\k\i\5\l\0\4\b\a\6\l\x\w\l\h\0\w\l\1\n\1\e\y\q\o\5\0\9\g\x\l\o\q\e\g\4\i\x\o\k\s\y\j\u\v\j\r\i\0\9\n\e\e\2\n\j\h\1\e\z\g\o\2\e\y\w\z\c\u\f\u\v\f\7\b\3\s\j\v\i\9\1\2\2\f\g\7\v\w\t\b\g\c\s\1\3\h\b\q\1\n\w\j\s\e\3\r\2\x\d\7\1\b\d\p\l\t\8\z\q\5\o\d\y\w\l\6\l\c\k\v\8\c\v\1\6\v\0\4\7\j\w\a\m\0\0\8\x\2\y\g\g\d\z\e\w\e\e\h\b\o\3\4\0\f\9\l\8\e\f\3\s\c\s\k\b\x\4\j\l\o\i\8\o\n\o\o\n\8\5\p\y\4\p\8\p\6\5\f\0\5\3\e\a\m\e\a\h\m\o\f\t\8\v\a\6\5\e\e\q\f\w\z\l\x\l\e\z\a\a\m\p\2\d\g\g\0\r\b\z\5\r\c\q\y\4\c\z\z\b\e\i\o\7\m\c\l\w\i\d\n\z\a\h\0\k\t\w\i\9\k\8\h\y\e\8\t\e\0\9\k\q\p\9\x\2\3\q\7\j\t\s\o\9\j\o\3\m\c\o\1\c\1\4\v\8\m\4\6\9\d\z\v\1\6\g\x\v\0\u\3\0\c\r\j ]] 00:18:41.060 00:18:41.060 real 0m3.100s 00:18:41.060 user 0m1.496s 00:18:41.060 sys 0m0.627s 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:18:41.060 ************************************ 00:18:41.060 END TEST dd_flags_misc_forced_aio 00:18:41.060 ************************************ 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:41.060 ************************************ 00:18:41.060 END TEST spdk_dd_posix 00:18:41.060 ************************************ 00:18:41.060 00:18:41.060 real 0m14.490s 00:18:41.060 user 0m6.065s 00:18:41.060 sys 0m3.698s 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.060 07:16:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:18:41.060 07:16:05 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:41.060 07:16:05 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:41.060 07:16:05 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.060 07:16:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:41.060 ************************************ 00:18:41.060 START TEST spdk_dd_malloc 00:18:41.060 ************************************ 00:18:41.060 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:41.060 * Looking for test storage... 00:18:41.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:41.060 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:41.060 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:41.060 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:41.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.319 --rc genhtml_branch_coverage=1 00:18:41.319 --rc genhtml_function_coverage=1 00:18:41.319 --rc genhtml_legend=1 00:18:41.319 --rc geninfo_all_blocks=1 00:18:41.319 --rc geninfo_unexecuted_blocks=1 00:18:41.319 00:18:41.319 ' 00:18:41.319 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:41.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.319 --rc genhtml_branch_coverage=1 00:18:41.319 --rc genhtml_function_coverage=1 00:18:41.319 --rc genhtml_legend=1 00:18:41.319 --rc geninfo_all_blocks=1 00:18:41.320 --rc geninfo_unexecuted_blocks=1 00:18:41.320 00:18:41.320 ' 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.320 --rc genhtml_branch_coverage=1 00:18:41.320 --rc genhtml_function_coverage=1 00:18:41.320 --rc genhtml_legend=1 00:18:41.320 --rc geninfo_all_blocks=1 00:18:41.320 --rc geninfo_unexecuted_blocks=1 00:18:41.320 00:18:41.320 ' 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.320 --rc genhtml_branch_coverage=1 00:18:41.320 --rc genhtml_function_coverage=1 00:18:41.320 --rc genhtml_legend=1 00:18:41.320 --rc geninfo_all_blocks=1 00:18:41.320 --rc geninfo_unexecuted_blocks=1 00:18:41.320 00:18:41.320 ' 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.320 07:16:05 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:18:41.320 ************************************ 00:18:41.320 START TEST dd_malloc_copy 00:18:41.320 ************************************ 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:41.320 07:16:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:41.320 [2024-11-20 07:16:05.351405] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:41.320 [2024-11-20 07:16:05.351659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:18:41.320 { 00:18:41.320 "subsystems": [ 00:18:41.320 { 00:18:41.320 "subsystem": "bdev", 00:18:41.320 "config": [ 00:18:41.320 { 00:18:41.320 "params": { 00:18:41.320 "block_size": 512, 00:18:41.320 "num_blocks": 1048576, 00:18:41.320 "name": "malloc0" 00:18:41.320 }, 00:18:41.320 "method": "bdev_malloc_create" 00:18:41.320 }, 00:18:41.320 { 00:18:41.320 "params": { 00:18:41.320 "block_size": 512, 00:18:41.320 "num_blocks": 1048576, 00:18:41.320 "name": "malloc1" 00:18:41.320 }, 00:18:41.320 "method": "bdev_malloc_create" 00:18:41.320 }, 00:18:41.320 { 00:18:41.320 "method": "bdev_wait_for_examine" 00:18:41.320 } 00:18:41.320 ] 00:18:41.320 } 00:18:41.320 ] 00:18:41.320 } 00:18:41.320 [2024-11-20 07:16:05.489765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.578 [2024-11-20 07:16:05.524919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.578 [2024-11-20 07:16:05.555946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.988  [2024-11-20T07:16:07.759Z] Copying: 208/512 [MB] (208 MBps) [2024-11-20T07:16:08.324Z] Copying: 416/512 [MB] (208 MBps) [2024-11-20T07:16:08.581Z] Copying: 512/512 [MB] (average 208 MBps) 00:18:44.378 00:18:44.378 07:16:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:18:44.378 07:16:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:18:44.378 07:16:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:44.378 07:16:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:44.378 { 00:18:44.379 "subsystems": [ 00:18:44.379 { 00:18:44.379 "subsystem": "bdev", 00:18:44.379 "config": [ 00:18:44.379 { 00:18:44.379 "params": { 00:18:44.379 "block_size": 512, 00:18:44.379 "num_blocks": 1048576, 00:18:44.379 "name": "malloc0" 00:18:44.379 }, 00:18:44.379 "method": "bdev_malloc_create" 00:18:44.379 }, 00:18:44.379 { 00:18:44.379 "params": { 00:18:44.379 "block_size": 512, 00:18:44.379 "num_blocks": 1048576, 00:18:44.379 "name": "malloc1" 00:18:44.379 }, 00:18:44.379 "method": 
"bdev_malloc_create" 00:18:44.379 }, 00:18:44.379 { 00:18:44.379 "method": "bdev_wait_for_examine" 00:18:44.379 } 00:18:44.379 ] 00:18:44.379 } 00:18:44.379 ] 00:18:44.379 } 00:18:44.379 [2024-11-20 07:16:08.535535] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:44.379 [2024-11-20 07:16:08.535744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59850 ] 00:18:44.636 [2024-11-20 07:16:08.675420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.636 [2024-11-20 07:16:08.710601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.636 [2024-11-20 07:16:08.741178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.008  [2024-11-20T07:16:11.220Z] Copying: 210/512 [MB] (210 MBps) [2024-11-20T07:16:11.479Z] Copying: 426/512 [MB] (216 MBps) [2024-11-20T07:16:11.740Z] Copying: 512/512 [MB] (average 220 MBps) 00:18:47.537 00:18:47.537 00:18:47.537 real 0m6.201s 00:18:47.537 user 0m5.547s 00:18:47.537 sys 0m0.463s 00:18:47.537 ************************************ 00:18:47.537 END TEST dd_malloc_copy 00:18:47.537 ************************************ 00:18:47.537 07:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.537 07:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:18:47.537 ************************************ 00:18:47.537 END TEST spdk_dd_malloc 00:18:47.537 ************************************ 00:18:47.537 00:18:47.537 real 0m6.387s 00:18:47.537 user 0m5.655s 00:18:47.537 sys 0m0.545s 00:18:47.537 07:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.537 07:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:18:47.537 07:16:11 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:47.537 07:16:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:47.537 07:16:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.537 07:16:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:47.537 ************************************ 00:18:47.537 START TEST spdk_dd_bdev_to_bdev 00:18:47.537 ************************************ 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:47.537 * Looking for test storage... 
00:18:47.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.537 --rc genhtml_branch_coverage=1 00:18:47.537 --rc genhtml_function_coverage=1 00:18:47.537 --rc genhtml_legend=1 00:18:47.537 --rc geninfo_all_blocks=1 00:18:47.537 --rc geninfo_unexecuted_blocks=1 00:18:47.537 00:18:47.537 ' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.537 --rc genhtml_branch_coverage=1 00:18:47.537 --rc genhtml_function_coverage=1 00:18:47.537 --rc genhtml_legend=1 00:18:47.537 --rc geninfo_all_blocks=1 00:18:47.537 --rc geninfo_unexecuted_blocks=1 00:18:47.537 00:18:47.537 ' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.537 --rc genhtml_branch_coverage=1 00:18:47.537 --rc genhtml_function_coverage=1 00:18:47.537 --rc genhtml_legend=1 00:18:47.537 --rc geninfo_all_blocks=1 00:18:47.537 --rc geninfo_unexecuted_blocks=1 00:18:47.537 00:18:47.537 ' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.537 --rc genhtml_branch_coverage=1 00:18:47.537 --rc genhtml_function_coverage=1 00:18:47.537 --rc genhtml_legend=1 00:18:47.537 --rc geninfo_all_blocks=1 00:18:47.537 --rc geninfo_unexecuted_blocks=1 00:18:47.537 00:18:47.537 ' 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.537 07:16:11 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.537 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.538 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:47.796 ************************************ 00:18:47.796 START TEST dd_inflate_file 00:18:47.796 ************************************ 00:18:47.796 07:16:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:47.796 [2024-11-20 07:16:11.775527] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:47.796 [2024-11-20 07:16:11.775723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:18:47.796 [2024-11-20 07:16:11.912645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.796 [2024-11-20 07:16:11.943791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.796 [2024-11-20 07:16:11.972125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.053  [2024-11-20T07:16:12.256Z] Copying: 64/64 [MB] (average 2000 MBps) 00:18:48.053 00:18:48.053 00:18:48.053 real 0m0.376s 00:18:48.053 user 0m0.193s 00:18:48.053 sys 0m0.180s 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.053 ************************************ 00:18:48.053 END TEST dd_inflate_file 00:18:48.053 ************************************ 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:48.053 ************************************ 00:18:48.053 START TEST dd_copy_to_out_bdev 00:18:48.053 ************************************ 00:18:48.053 07:16:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:48.053 { 00:18:48.053 "subsystems": [ 00:18:48.053 { 00:18:48.053 "subsystem": "bdev", 00:18:48.053 "config": [ 00:18:48.053 { 00:18:48.053 "params": { 00:18:48.053 "trtype": "pcie", 00:18:48.053 "traddr": "0000:00:10.0", 00:18:48.053 "name": "Nvme0" 00:18:48.053 }, 00:18:48.053 "method": "bdev_nvme_attach_controller" 00:18:48.053 }, 00:18:48.053 { 00:18:48.053 "params": { 00:18:48.053 "trtype": "pcie", 00:18:48.053 "traddr": "0000:00:11.0", 00:18:48.053 "name": "Nvme1" 00:18:48.053 }, 00:18:48.053 "method": "bdev_nvme_attach_controller" 00:18:48.053 }, 00:18:48.053 { 00:18:48.053 "method": "bdev_wait_for_examine" 00:18:48.053 } 00:18:48.053 ] 00:18:48.053 } 00:18:48.053 ] 00:18:48.053 } 00:18:48.053 [2024-11-20 07:16:12.196741] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
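(The test_file0_size=67108891 check above is worth unpacking: it is exactly the 26-byte magic string plus echo's trailing newline (27 B) followed by the 64 MiB appended from /dev/zero by the inflate step, since 64 x 1048576 + 27 = 67108891. The copy_to_out_bdev run then pushes that file into the first NVMe namespace using a config equivalent to the sketch below; controller names and PCI addresses are taken from the config dump above, and gen_conf again stands in for the test's helper:

gen_conf() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
"$SPDK_DD" --if="$DD_DIR/dd.dump0" --ob=Nvme0n1 --json <(gen_conf)   # file -> Nvme0n1, per bdev_to_bdev.sh@107
)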
00:18:48.053 [2024-11-20 07:16:12.196804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59990 ] 00:18:48.311 [2024-11-20 07:16:12.331147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.311 [2024-11-20 07:16:12.362438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.311 [2024-11-20 07:16:12.391043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.243  [2024-11-20T07:16:13.446Z] Copying: 64/64 [MB] (average 94 MBps) 00:18:49.243 00:18:49.243 00:18:49.243 real 0m1.230s 00:18:49.243 user 0m1.036s 00:18:49.243 sys 0m0.936s 00:18:49.243 ************************************ 00:18:49.243 END TEST dd_copy_to_out_bdev 00:18:49.243 ************************************ 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 ************************************ 00:18:49.243 START TEST dd_offset_magic 00:18:49.243 ************************************ 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:18:49.243 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:18:49.244 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:18:49.244 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:18:49.244 07:16:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:18:49.502 [2024-11-20 07:16:13.464308] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:49.502 [2024-11-20 07:16:13.464784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60024 ] 00:18:49.502 { 00:18:49.502 "subsystems": [ 00:18:49.502 { 00:18:49.502 "subsystem": "bdev", 00:18:49.502 "config": [ 00:18:49.502 { 00:18:49.502 "params": { 00:18:49.502 "trtype": "pcie", 00:18:49.502 "traddr": "0000:00:10.0", 00:18:49.502 "name": "Nvme0" 00:18:49.502 }, 00:18:49.502 "method": "bdev_nvme_attach_controller" 00:18:49.502 }, 00:18:49.502 { 00:18:49.502 "params": { 00:18:49.502 "trtype": "pcie", 00:18:49.502 "traddr": "0000:00:11.0", 00:18:49.502 "name": "Nvme1" 00:18:49.502 }, 00:18:49.502 "method": "bdev_nvme_attach_controller" 00:18:49.502 }, 00:18:49.502 { 00:18:49.502 "method": "bdev_wait_for_examine" 00:18:49.502 } 00:18:49.502 ] 00:18:49.502 } 00:18:49.502 ] 00:18:49.502 } 00:18:49.502 [2024-11-20 07:16:13.602061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.502 [2024-11-20 07:16:13.637997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.502 [2024-11-20 07:16:13.669545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.772  [2024-11-20T07:16:14.243Z] Copying: 65/65 [MB] (average 928 MBps) 00:18:50.040 00:18:50.040 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:18:50.040 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:18:50.040 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:18:50.040 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:18:50.040 [2024-11-20 07:16:14.087167] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:50.040 [2024-11-20 07:16:14.087246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60044 ] 00:18:50.040 { 00:18:50.040 "subsystems": [ 00:18:50.040 { 00:18:50.040 "subsystem": "bdev", 00:18:50.040 "config": [ 00:18:50.040 { 00:18:50.040 "params": { 00:18:50.040 "trtype": "pcie", 00:18:50.040 "traddr": "0000:00:10.0", 00:18:50.040 "name": "Nvme0" 00:18:50.040 }, 00:18:50.040 "method": "bdev_nvme_attach_controller" 00:18:50.040 }, 00:18:50.040 { 00:18:50.040 "params": { 00:18:50.040 "trtype": "pcie", 00:18:50.040 "traddr": "0000:00:11.0", 00:18:50.040 "name": "Nvme1" 00:18:50.040 }, 00:18:50.040 "method": "bdev_nvme_attach_controller" 00:18:50.040 }, 00:18:50.040 { 00:18:50.040 "method": "bdev_wait_for_examine" 00:18:50.040 } 00:18:50.040 ] 00:18:50.040 } 00:18:50.040 ] 00:18:50.040 } 00:18:50.040 [2024-11-20 07:16:14.224249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.300 [2024-11-20 07:16:14.254911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.300 [2024-11-20 07:16:14.283977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.300  [2024-11-20T07:16:14.761Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:18:50.558 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:18:50.558 07:16:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:18:50.558 [2024-11-20 07:16:14.574400] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:50.558 [2024-11-20 07:16:14.574463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:18:50.558 { 00:18:50.558 "subsystems": [ 00:18:50.558 { 00:18:50.558 "subsystem": "bdev", 00:18:50.558 "config": [ 00:18:50.558 { 00:18:50.558 "params": { 00:18:50.558 "trtype": "pcie", 00:18:50.558 "traddr": "0000:00:10.0", 00:18:50.558 "name": "Nvme0" 00:18:50.558 }, 00:18:50.558 "method": "bdev_nvme_attach_controller" 00:18:50.558 }, 00:18:50.558 { 00:18:50.558 "params": { 00:18:50.558 "trtype": "pcie", 00:18:50.558 "traddr": "0000:00:11.0", 00:18:50.558 "name": "Nvme1" 00:18:50.558 }, 00:18:50.558 "method": "bdev_nvme_attach_controller" 00:18:50.558 }, 00:18:50.558 { 00:18:50.558 "method": "bdev_wait_for_examine" 00:18:50.558 } 00:18:50.558 ] 00:18:50.558 } 00:18:50.558 ] 00:18:50.558 } 00:18:50.558 [2024-11-20 07:16:14.709201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.558 [2024-11-20 07:16:14.739693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.818 [2024-11-20 07:16:14.767958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.818  [2024-11-20T07:16:15.279Z] Copying: 65/65 [MB] (average 1015 MBps) 00:18:51.076 00:18:51.076 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:18:51.076 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:18:51.076 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:18:51.076 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:18:51.076 [2024-11-20 07:16:15.197235] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:51.076 [2024-11-20 07:16:15.197299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:18:51.076 { 00:18:51.076 "subsystems": [ 00:18:51.076 { 00:18:51.076 "subsystem": "bdev", 00:18:51.076 "config": [ 00:18:51.076 { 00:18:51.076 "params": { 00:18:51.076 "trtype": "pcie", 00:18:51.076 "traddr": "0000:00:10.0", 00:18:51.076 "name": "Nvme0" 00:18:51.076 }, 00:18:51.076 "method": "bdev_nvme_attach_controller" 00:18:51.076 }, 00:18:51.076 { 00:18:51.076 "params": { 00:18:51.076 "trtype": "pcie", 00:18:51.076 "traddr": "0000:00:11.0", 00:18:51.076 "name": "Nvme1" 00:18:51.076 }, 00:18:51.076 "method": "bdev_nvme_attach_controller" 00:18:51.076 }, 00:18:51.076 { 00:18:51.076 "method": "bdev_wait_for_examine" 00:18:51.076 } 00:18:51.076 ] 00:18:51.076 } 00:18:51.076 ] 00:18:51.076 } 00:18:51.334 [2024-11-20 07:16:15.339553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.334 [2024-11-20 07:16:15.375521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.334 [2024-11-20 07:16:15.407549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.593  [2024-11-20T07:16:15.796Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:51.593 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:18:51.593 00:18:51.593 real 0m2.250s 00:18:51.593 user 0m1.600s 00:18:51.593 sys 0m0.539s 00:18:51.593 ************************************ 00:18:51.593 END TEST dd_offset_magic 00:18:51.593 ************************************ 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:18:51.593 07:16:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:51.593 [2024-11-20 07:16:15.746991] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
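The dd_offset_magic test that just finished (END TEST above) copies 65 MiB of 1 MiB blocks from Nvme0n1 into Nvme1n1 at each offset in the list (16 64), reads one block back from the same offset, and checks that the first 26 bytes still carry the marker string. A condensed sketch of one iteration, assuming the flags and helper names visible in the trace ($offset is illustrative; error handling omitted):

  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json <(gen_conf)
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json <(gen_conf)
  read -rn26 magic_check < dd.dump1                      # first 26 bytes of the block read back
  [[ $magic_check == 'This Is Our Magic, find it' ]]     # the comparison at bdev_to_bdev.sh@36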
00:18:51.593 [2024-11-20 07:16:15.747215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60107 ] 00:18:51.593 { 00:18:51.593 "subsystems": [ 00:18:51.593 { 00:18:51.593 "subsystem": "bdev", 00:18:51.593 "config": [ 00:18:51.593 { 00:18:51.593 "params": { 00:18:51.593 "trtype": "pcie", 00:18:51.593 "traddr": "0000:00:10.0", 00:18:51.593 "name": "Nvme0" 00:18:51.593 }, 00:18:51.593 "method": "bdev_nvme_attach_controller" 00:18:51.593 }, 00:18:51.593 { 00:18:51.593 "params": { 00:18:51.593 "trtype": "pcie", 00:18:51.593 "traddr": "0000:00:11.0", 00:18:51.593 "name": "Nvme1" 00:18:51.593 }, 00:18:51.593 "method": "bdev_nvme_attach_controller" 00:18:51.593 }, 00:18:51.593 { 00:18:51.593 "method": "bdev_wait_for_examine" 00:18:51.593 } 00:18:51.593 ] 00:18:51.593 } 00:18:51.593 ] 00:18:51.593 } 00:18:51.851 [2024-11-20 07:16:15.887018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.851 [2024-11-20 07:16:15.922477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.851 [2024-11-20 07:16:15.954039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.108  [2024-11-20T07:16:16.311Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:18:52.108 00:18:52.108 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:18:52.109 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:52.109 [2024-11-20 07:16:16.256436] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:52.109 [2024-11-20 07:16:16.256495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:18:52.109 { 00:18:52.109 "subsystems": [ 00:18:52.109 { 00:18:52.109 "subsystem": "bdev", 00:18:52.109 "config": [ 00:18:52.109 { 00:18:52.109 "params": { 00:18:52.109 "trtype": "pcie", 00:18:52.109 "traddr": "0000:00:10.0", 00:18:52.109 "name": "Nvme0" 00:18:52.109 }, 00:18:52.109 "method": "bdev_nvme_attach_controller" 00:18:52.109 }, 00:18:52.109 { 00:18:52.109 "params": { 00:18:52.109 "trtype": "pcie", 00:18:52.109 "traddr": "0000:00:11.0", 00:18:52.109 "name": "Nvme1" 00:18:52.109 }, 00:18:52.109 "method": "bdev_nvme_attach_controller" 00:18:52.109 }, 00:18:52.109 { 00:18:52.109 "method": "bdev_wait_for_examine" 00:18:52.109 } 00:18:52.109 ] 00:18:52.109 } 00:18:52.109 ] 00:18:52.109 } 00:18:52.365 [2024-11-20 07:16:16.394595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.365 [2024-11-20 07:16:16.431168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.366 [2024-11-20 07:16:16.463492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.623  [2024-11-20T07:16:16.826Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:18:52.623 00:18:52.623 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:18:52.623 ************************************ 00:18:52.623 END TEST spdk_dd_bdev_to_bdev 00:18:52.623 ************************************ 00:18:52.623 00:18:52.623 real 0m5.172s 00:18:52.623 user 0m3.662s 00:18:52.623 sys 0m2.135s 00:18:52.623 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.623 07:16:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:18:52.623 07:16:16 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:18:52.623 07:16:16 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:18:52.623 07:16:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.623 07:16:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.623 07:16:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:52.623 ************************************ 00:18:52.623 START TEST spdk_dd_uring 00:18:52.623 ************************************ 00:18:52.623 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:18:52.881 * Looking for test storage... 
00:18:52.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:18:52.881 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.882 --rc genhtml_branch_coverage=1 00:18:52.882 --rc genhtml_function_coverage=1 00:18:52.882 --rc genhtml_legend=1 00:18:52.882 --rc geninfo_all_blocks=1 00:18:52.882 --rc geninfo_unexecuted_blocks=1 00:18:52.882 00:18:52.882 ' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.882 --rc genhtml_branch_coverage=1 00:18:52.882 --rc genhtml_function_coverage=1 00:18:52.882 --rc genhtml_legend=1 00:18:52.882 --rc geninfo_all_blocks=1 00:18:52.882 --rc geninfo_unexecuted_blocks=1 00:18:52.882 00:18:52.882 ' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.882 --rc genhtml_branch_coverage=1 00:18:52.882 --rc genhtml_function_coverage=1 00:18:52.882 --rc genhtml_legend=1 00:18:52.882 --rc geninfo_all_blocks=1 00:18:52.882 --rc geninfo_unexecuted_blocks=1 00:18:52.882 00:18:52.882 ' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.882 --rc genhtml_branch_coverage=1 00:18:52.882 --rc genhtml_function_coverage=1 00:18:52.882 --rc genhtml_legend=1 00:18:52.882 --rc geninfo_all_blocks=1 00:18:52.882 --rc geninfo_unexecuted_blocks=1 00:18:52.882 00:18:52.882 ' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:18:52.882 ************************************ 00:18:52.882 START TEST dd_uring_copy 00:18:52.882 ************************************ 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:18:52.882 
07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=bxp8kh91r9ffl5pmdzwqlxo11k888lldb8tfeqhd8nod8rxe00uemoee0bgvdrmme3pcnz2oebkhpnyfgvw616153b6p0y0uk2zkqlg9oqzos39kesqaepy5wjyzrmnzvbd61yn0ezxn63rbbht4xm6bidpmss5sbyqxkbuo61migkd5ha4tsbvos1pcslpz1hbnixx93majd7ndlwb3mx6oxbqr868lii83kkl23jsk3vt60eb2utab7jrb4x8y9389x4ky99apf4mdc9ysf7xee0mw0jktasfi2ong6a4rr4vgxkv5t6pcbe3etce48sobvrsv52azlkxgojpyvgq8c8t0t1pkl3zzlspnpxronkfglf1w4521o53pl233mra1zoik7fqjp9bdcsabbo2fsip418a6pl8xsbqh056cta429aiet49hpblvfyh6i6vp3q1whw11vh32yti2m3lh6dsn3wf4uva61grdhe6advbnjtjcee3la67b8xvxagp5cdrzimf8vil5ht8zbvadisox1kju1glg08vavhzmnk0049lt3kuogejnr7o7ghlfbkgz0wefnafp6p8i6oyix5r8m500ocffuyk79lxb8etl9nstfeonconihm5fklqut06srgfslwpkg1ysth977a0aixreqtl5ijo2qxov5zcub15ptz4dignvv9d37w4wzpxgqcmvz7q3i50j2j81lmtprh7nofobqbpaznxkgrcd8r1obtj4l4gr9v31l7onsdfpa282w53tkcbvjf0m7hwp4yioc9jpmnohmzldangv8l5dmfbue0dt58znkpf7sy0ifji7t6n46dkpw13anwmougcwydsr9vk726r5erqvk871uzi68xol3tln7sg3echk7oy2xhjx56b1gwfrs7bqlkehsv8qh0neantw6bvy1ddhsbcfjvarb3xmwo7wvjjru4m5zro46yaw524bty5ci2wi1eerc7h9xf28alxtff3p2xwuxjntv489 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
bxp8kh91r9ffl5pmdzwqlxo11k888lldb8tfeqhd8nod8rxe00uemoee0bgvdrmme3pcnz2oebkhpnyfgvw616153b6p0y0uk2zkqlg9oqzos39kesqaepy5wjyzrmnzvbd61yn0ezxn63rbbht4xm6bidpmss5sbyqxkbuo61migkd5ha4tsbvos1pcslpz1hbnixx93majd7ndlwb3mx6oxbqr868lii83kkl23jsk3vt60eb2utab7jrb4x8y9389x4ky99apf4mdc9ysf7xee0mw0jktasfi2ong6a4rr4vgxkv5t6pcbe3etce48sobvrsv52azlkxgojpyvgq8c8t0t1pkl3zzlspnpxronkfglf1w4521o53pl233mra1zoik7fqjp9bdcsabbo2fsip418a6pl8xsbqh056cta429aiet49hpblvfyh6i6vp3q1whw11vh32yti2m3lh6dsn3wf4uva61grdhe6advbnjtjcee3la67b8xvxagp5cdrzimf8vil5ht8zbvadisox1kju1glg08vavhzmnk0049lt3kuogejnr7o7ghlfbkgz0wefnafp6p8i6oyix5r8m500ocffuyk79lxb8etl9nstfeonconihm5fklqut06srgfslwpkg1ysth977a0aixreqtl5ijo2qxov5zcub15ptz4dignvv9d37w4wzpxgqcmvz7q3i50j2j81lmtprh7nofobqbpaznxkgrcd8r1obtj4l4gr9v31l7onsdfpa282w53tkcbvjf0m7hwp4yioc9jpmnohmzldangv8l5dmfbue0dt58znkpf7sy0ifji7t6n46dkpw13anwmougcwydsr9vk726r5erqvk871uzi68xol3tln7sg3echk7oy2xhjx56b1gwfrs7bqlkehsv8qh0neantw6bvy1ddhsbcfjvarb3xmwo7wvjjru4m5zro46yaw524bty5ci2wi1eerc7h9xf28alxtff3p2xwuxjntv489 00:18:52.882 07:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:18:52.882 [2024-11-20 07:16:17.006289] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:52.882 [2024-11-20 07:16:17.006481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:18:53.140 [2024-11-20 07:16:17.146109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.140 [2024-11-20 07:16:17.182633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.140 [2024-11-20 07:16:17.213478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.707  [2024-11-20T07:16:17.910Z] Copying: 511/511 [MB] (average 1999 MBps) 00:18:53.707 00:18:53.707 07:16:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:18:53.707 07:16:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:18:53.707 07:16:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:53.707 07:16:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:18:53.707 [2024-11-20 07:16:17.851164] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
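For the uring copy test, the trace above allocates a zram device through the zram-control sysfs interface, sizes it to 512M, writes a freshly generated 1024-byte magic into magic.dump0, and pads the file with zeros to just under 512 MiB before pushing it through the uring0 bdev. A sketch of that setup, under the assumption that the bare 'echo 512M' at dd/common.sh@182 is redirected into the device's disksize node (the redirect target is not shown in xtrace):

  dev_id=$(cat /sys/class/zram-control/hot_add)      # kernel hands back the new device index, 1 here
  echo 512M > "/sys/block/zram${dev_id}/disksize"    # assumed target of the echo in set_zram_dev
  echo "$magic" > magic.dump0                        # the 1024 random bytes from gen_bytes 1024
  spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1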
00:18:53.707 [2024-11-20 07:16:17.851243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60211 ] 00:18:53.707 { 00:18:53.707 "subsystems": [ 00:18:53.707 { 00:18:53.707 "subsystem": "bdev", 00:18:53.707 "config": [ 00:18:53.707 { 00:18:53.707 "params": { 00:18:53.707 "block_size": 512, 00:18:53.707 "num_blocks": 1048576, 00:18:53.707 "name": "malloc0" 00:18:53.707 }, 00:18:53.707 "method": "bdev_malloc_create" 00:18:53.707 }, 00:18:53.707 { 00:18:53.707 "params": { 00:18:53.707 "filename": "/dev/zram1", 00:18:53.707 "name": "uring0" 00:18:53.707 }, 00:18:53.707 "method": "bdev_uring_create" 00:18:53.707 }, 00:18:53.707 { 00:18:53.707 "method": "bdev_wait_for_examine" 00:18:53.707 } 00:18:53.707 ] 00:18:53.707 } 00:18:53.707 ] 00:18:53.707 } 00:18:53.964 [2024-11-20 07:16:17.992459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.964 [2024-11-20 07:16:18.028903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.964 [2024-11-20 07:16:18.060486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:55.338  [2024-11-20T07:16:20.471Z] Copying: 263/512 [MB] (263 MBps) [2024-11-20T07:16:20.471Z] Copying: 512/512 [MB] (average 263 MBps) 00:18:56.268 00:18:56.268 07:16:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:18:56.268 07:16:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:18:56.268 07:16:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:56.268 07:16:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:18:56.268 [2024-11-20 07:16:20.359619] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:56.268 [2024-11-20 07:16:20.359674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:18:56.268 { 00:18:56.268 "subsystems": [ 00:18:56.268 { 00:18:56.268 "subsystem": "bdev", 00:18:56.268 "config": [ 00:18:56.268 { 00:18:56.268 "params": { 00:18:56.268 "block_size": 512, 00:18:56.268 "num_blocks": 1048576, 00:18:56.268 "name": "malloc0" 00:18:56.268 }, 00:18:56.268 "method": "bdev_malloc_create" 00:18:56.268 }, 00:18:56.268 { 00:18:56.268 "params": { 00:18:56.268 "filename": "/dev/zram1", 00:18:56.268 "name": "uring0" 00:18:56.268 }, 00:18:56.268 "method": "bdev_uring_create" 00:18:56.268 }, 00:18:56.268 { 00:18:56.268 "method": "bdev_wait_for_examine" 00:18:56.268 } 00:18:56.268 ] 00:18:56.268 } 00:18:56.268 ] 00:18:56.268 } 00:18:56.525 [2024-11-20 07:16:20.497377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.525 [2024-11-20 07:16:20.533611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.525 [2024-11-20 07:16:20.565338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.898  [2024-11-20T07:16:23.030Z] Copying: 208/512 [MB] (208 MBps) [2024-11-20T07:16:23.288Z] Copying: 439/512 [MB] (230 MBps) [2024-11-20T07:16:23.288Z] Copying: 512/512 [MB] (average 215 MBps) 00:18:59.085 00:18:59.085 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:18:59.085 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ bxp8kh91r9ffl5pmdzwqlxo11k888lldb8tfeqhd8nod8rxe00uemoee0bgvdrmme3pcnz2oebkhpnyfgvw616153b6p0y0uk2zkqlg9oqzos39kesqaepy5wjyzrmnzvbd61yn0ezxn63rbbht4xm6bidpmss5sbyqxkbuo61migkd5ha4tsbvos1pcslpz1hbnixx93majd7ndlwb3mx6oxbqr868lii83kkl23jsk3vt60eb2utab7jrb4x8y9389x4ky99apf4mdc9ysf7xee0mw0jktasfi2ong6a4rr4vgxkv5t6pcbe3etce48sobvrsv52azlkxgojpyvgq8c8t0t1pkl3zzlspnpxronkfglf1w4521o53pl233mra1zoik7fqjp9bdcsabbo2fsip418a6pl8xsbqh056cta429aiet49hpblvfyh6i6vp3q1whw11vh32yti2m3lh6dsn3wf4uva61grdhe6advbnjtjcee3la67b8xvxagp5cdrzimf8vil5ht8zbvadisox1kju1glg08vavhzmnk0049lt3kuogejnr7o7ghlfbkgz0wefnafp6p8i6oyix5r8m500ocffuyk79lxb8etl9nstfeonconihm5fklqut06srgfslwpkg1ysth977a0aixreqtl5ijo2qxov5zcub15ptz4dignvv9d37w4wzpxgqcmvz7q3i50j2j81lmtprh7nofobqbpaznxkgrcd8r1obtj4l4gr9v31l7onsdfpa282w53tkcbvjf0m7hwp4yioc9jpmnohmzldangv8l5dmfbue0dt58znkpf7sy0ifji7t6n46dkpw13anwmougcwydsr9vk726r5erqvk871uzi68xol3tln7sg3echk7oy2xhjx56b1gwfrs7bqlkehsv8qh0neantw6bvy1ddhsbcfjvarb3xmwo7wvjjru4m5zro46yaw524bty5ci2wi1eerc7h9xf28alxtff3p2xwuxjntv489 == 
\b\x\p\8\k\h\9\1\r\9\f\f\l\5\p\m\d\z\w\q\l\x\o\1\1\k\8\8\8\l\l\d\b\8\t\f\e\q\h\d\8\n\o\d\8\r\x\e\0\0\u\e\m\o\e\e\0\b\g\v\d\r\m\m\e\3\p\c\n\z\2\o\e\b\k\h\p\n\y\f\g\v\w\6\1\6\1\5\3\b\6\p\0\y\0\u\k\2\z\k\q\l\g\9\o\q\z\o\s\3\9\k\e\s\q\a\e\p\y\5\w\j\y\z\r\m\n\z\v\b\d\6\1\y\n\0\e\z\x\n\6\3\r\b\b\h\t\4\x\m\6\b\i\d\p\m\s\s\5\s\b\y\q\x\k\b\u\o\6\1\m\i\g\k\d\5\h\a\4\t\s\b\v\o\s\1\p\c\s\l\p\z\1\h\b\n\i\x\x\9\3\m\a\j\d\7\n\d\l\w\b\3\m\x\6\o\x\b\q\r\8\6\8\l\i\i\8\3\k\k\l\2\3\j\s\k\3\v\t\6\0\e\b\2\u\t\a\b\7\j\r\b\4\x\8\y\9\3\8\9\x\4\k\y\9\9\a\p\f\4\m\d\c\9\y\s\f\7\x\e\e\0\m\w\0\j\k\t\a\s\f\i\2\o\n\g\6\a\4\r\r\4\v\g\x\k\v\5\t\6\p\c\b\e\3\e\t\c\e\4\8\s\o\b\v\r\s\v\5\2\a\z\l\k\x\g\o\j\p\y\v\g\q\8\c\8\t\0\t\1\p\k\l\3\z\z\l\s\p\n\p\x\r\o\n\k\f\g\l\f\1\w\4\5\2\1\o\5\3\p\l\2\3\3\m\r\a\1\z\o\i\k\7\f\q\j\p\9\b\d\c\s\a\b\b\o\2\f\s\i\p\4\1\8\a\6\p\l\8\x\s\b\q\h\0\5\6\c\t\a\4\2\9\a\i\e\t\4\9\h\p\b\l\v\f\y\h\6\i\6\v\p\3\q\1\w\h\w\1\1\v\h\3\2\y\t\i\2\m\3\l\h\6\d\s\n\3\w\f\4\u\v\a\6\1\g\r\d\h\e\6\a\d\v\b\n\j\t\j\c\e\e\3\l\a\6\7\b\8\x\v\x\a\g\p\5\c\d\r\z\i\m\f\8\v\i\l\5\h\t\8\z\b\v\a\d\i\s\o\x\1\k\j\u\1\g\l\g\0\8\v\a\v\h\z\m\n\k\0\0\4\9\l\t\3\k\u\o\g\e\j\n\r\7\o\7\g\h\l\f\b\k\g\z\0\w\e\f\n\a\f\p\6\p\8\i\6\o\y\i\x\5\r\8\m\5\0\0\o\c\f\f\u\y\k\7\9\l\x\b\8\e\t\l\9\n\s\t\f\e\o\n\c\o\n\i\h\m\5\f\k\l\q\u\t\0\6\s\r\g\f\s\l\w\p\k\g\1\y\s\t\h\9\7\7\a\0\a\i\x\r\e\q\t\l\5\i\j\o\2\q\x\o\v\5\z\c\u\b\1\5\p\t\z\4\d\i\g\n\v\v\9\d\3\7\w\4\w\z\p\x\g\q\c\m\v\z\7\q\3\i\5\0\j\2\j\8\1\l\m\t\p\r\h\7\n\o\f\o\b\q\b\p\a\z\n\x\k\g\r\c\d\8\r\1\o\b\t\j\4\l\4\g\r\9\v\3\1\l\7\o\n\s\d\f\p\a\2\8\2\w\5\3\t\k\c\b\v\j\f\0\m\7\h\w\p\4\y\i\o\c\9\j\p\m\n\o\h\m\z\l\d\a\n\g\v\8\l\5\d\m\f\b\u\e\0\d\t\5\8\z\n\k\p\f\7\s\y\0\i\f\j\i\7\t\6\n\4\6\d\k\p\w\1\3\a\n\w\m\o\u\g\c\w\y\d\s\r\9\v\k\7\2\6\r\5\e\r\q\v\k\8\7\1\u\z\i\6\8\x\o\l\3\t\l\n\7\s\g\3\e\c\h\k\7\o\y\2\x\h\j\x\5\6\b\1\g\w\f\r\s\7\b\q\l\k\e\h\s\v\8\q\h\0\n\e\a\n\t\w\6\b\v\y\1\d\d\h\s\b\c\f\j\v\a\r\b\3\x\m\w\o\7\w\v\j\j\r\u\4\m\5\z\r\o\4\6\y\a\w\5\2\4\b\t\y\5\c\i\2\w\i\1\e\e\r\c\7\h\9\x\f\2\8\a\l\x\t\f\f\3\p\2\x\w\u\x\j\n\t\v\4\8\9 ]] 00:18:59.085 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:18:59.085 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ bxp8kh91r9ffl5pmdzwqlxo11k888lldb8tfeqhd8nod8rxe00uemoee0bgvdrmme3pcnz2oebkhpnyfgvw616153b6p0y0uk2zkqlg9oqzos39kesqaepy5wjyzrmnzvbd61yn0ezxn63rbbht4xm6bidpmss5sbyqxkbuo61migkd5ha4tsbvos1pcslpz1hbnixx93majd7ndlwb3mx6oxbqr868lii83kkl23jsk3vt60eb2utab7jrb4x8y9389x4ky99apf4mdc9ysf7xee0mw0jktasfi2ong6a4rr4vgxkv5t6pcbe3etce48sobvrsv52azlkxgojpyvgq8c8t0t1pkl3zzlspnpxronkfglf1w4521o53pl233mra1zoik7fqjp9bdcsabbo2fsip418a6pl8xsbqh056cta429aiet49hpblvfyh6i6vp3q1whw11vh32yti2m3lh6dsn3wf4uva61grdhe6advbnjtjcee3la67b8xvxagp5cdrzimf8vil5ht8zbvadisox1kju1glg08vavhzmnk0049lt3kuogejnr7o7ghlfbkgz0wefnafp6p8i6oyix5r8m500ocffuyk79lxb8etl9nstfeonconihm5fklqut06srgfslwpkg1ysth977a0aixreqtl5ijo2qxov5zcub15ptz4dignvv9d37w4wzpxgqcmvz7q3i50j2j81lmtprh7nofobqbpaznxkgrcd8r1obtj4l4gr9v31l7onsdfpa282w53tkcbvjf0m7hwp4yioc9jpmnohmzldangv8l5dmfbue0dt58znkpf7sy0ifji7t6n46dkpw13anwmougcwydsr9vk726r5erqvk871uzi68xol3tln7sg3echk7oy2xhjx56b1gwfrs7bqlkehsv8qh0neantw6bvy1ddhsbcfjvarb3xmwo7wvjjru4m5zro46yaw524bty5ci2wi1eerc7h9xf28alxtff3p2xwuxjntv489 == 
\b\x\p\8\k\h\9\1\r\9\f\f\l\5\p\m\d\z\w\q\l\x\o\1\1\k\8\8\8\l\l\d\b\8\t\f\e\q\h\d\8\n\o\d\8\r\x\e\0\0\u\e\m\o\e\e\0\b\g\v\d\r\m\m\e\3\p\c\n\z\2\o\e\b\k\h\p\n\y\f\g\v\w\6\1\6\1\5\3\b\6\p\0\y\0\u\k\2\z\k\q\l\g\9\o\q\z\o\s\3\9\k\e\s\q\a\e\p\y\5\w\j\y\z\r\m\n\z\v\b\d\6\1\y\n\0\e\z\x\n\6\3\r\b\b\h\t\4\x\m\6\b\i\d\p\m\s\s\5\s\b\y\q\x\k\b\u\o\6\1\m\i\g\k\d\5\h\a\4\t\s\b\v\o\s\1\p\c\s\l\p\z\1\h\b\n\i\x\x\9\3\m\a\j\d\7\n\d\l\w\b\3\m\x\6\o\x\b\q\r\8\6\8\l\i\i\8\3\k\k\l\2\3\j\s\k\3\v\t\6\0\e\b\2\u\t\a\b\7\j\r\b\4\x\8\y\9\3\8\9\x\4\k\y\9\9\a\p\f\4\m\d\c\9\y\s\f\7\x\e\e\0\m\w\0\j\k\t\a\s\f\i\2\o\n\g\6\a\4\r\r\4\v\g\x\k\v\5\t\6\p\c\b\e\3\e\t\c\e\4\8\s\o\b\v\r\s\v\5\2\a\z\l\k\x\g\o\j\p\y\v\g\q\8\c\8\t\0\t\1\p\k\l\3\z\z\l\s\p\n\p\x\r\o\n\k\f\g\l\f\1\w\4\5\2\1\o\5\3\p\l\2\3\3\m\r\a\1\z\o\i\k\7\f\q\j\p\9\b\d\c\s\a\b\b\o\2\f\s\i\p\4\1\8\a\6\p\l\8\x\s\b\q\h\0\5\6\c\t\a\4\2\9\a\i\e\t\4\9\h\p\b\l\v\f\y\h\6\i\6\v\p\3\q\1\w\h\w\1\1\v\h\3\2\y\t\i\2\m\3\l\h\6\d\s\n\3\w\f\4\u\v\a\6\1\g\r\d\h\e\6\a\d\v\b\n\j\t\j\c\e\e\3\l\a\6\7\b\8\x\v\x\a\g\p\5\c\d\r\z\i\m\f\8\v\i\l\5\h\t\8\z\b\v\a\d\i\s\o\x\1\k\j\u\1\g\l\g\0\8\v\a\v\h\z\m\n\k\0\0\4\9\l\t\3\k\u\o\g\e\j\n\r\7\o\7\g\h\l\f\b\k\g\z\0\w\e\f\n\a\f\p\6\p\8\i\6\o\y\i\x\5\r\8\m\5\0\0\o\c\f\f\u\y\k\7\9\l\x\b\8\e\t\l\9\n\s\t\f\e\o\n\c\o\n\i\h\m\5\f\k\l\q\u\t\0\6\s\r\g\f\s\l\w\p\k\g\1\y\s\t\h\9\7\7\a\0\a\i\x\r\e\q\t\l\5\i\j\o\2\q\x\o\v\5\z\c\u\b\1\5\p\t\z\4\d\i\g\n\v\v\9\d\3\7\w\4\w\z\p\x\g\q\c\m\v\z\7\q\3\i\5\0\j\2\j\8\1\l\m\t\p\r\h\7\n\o\f\o\b\q\b\p\a\z\n\x\k\g\r\c\d\8\r\1\o\b\t\j\4\l\4\g\r\9\v\3\1\l\7\o\n\s\d\f\p\a\2\8\2\w\5\3\t\k\c\b\v\j\f\0\m\7\h\w\p\4\y\i\o\c\9\j\p\m\n\o\h\m\z\l\d\a\n\g\v\8\l\5\d\m\f\b\u\e\0\d\t\5\8\z\n\k\p\f\7\s\y\0\i\f\j\i\7\t\6\n\4\6\d\k\p\w\1\3\a\n\w\m\o\u\g\c\w\y\d\s\r\9\v\k\7\2\6\r\5\e\r\q\v\k\8\7\1\u\z\i\6\8\x\o\l\3\t\l\n\7\s\g\3\e\c\h\k\7\o\y\2\x\h\j\x\5\6\b\1\g\w\f\r\s\7\b\q\l\k\e\h\s\v\8\q\h\0\n\e\a\n\t\w\6\b\v\y\1\d\d\h\s\b\c\f\j\v\a\r\b\3\x\m\w\o\7\w\v\j\j\r\u\4\m\5\z\r\o\4\6\y\a\w\5\2\4\b\t\y\5\c\i\2\w\i\1\e\e\r\c\7\h\9\x\f\2\8\a\l\x\t\f\f\3\p\2\x\w\u\x\j\n\t\v\4\8\9 ]] 00:18:59.085 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:18:59.343 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:18:59.344 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:18:59.344 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:59.344 07:16:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:18:59.344 [2024-11-20 07:16:23.505677] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
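Both 1024-byte reads above are compared against the magic with a [[ ... == ... ]] test; xtrace renders the right-hand operand with every character backslash-escaped, which is how bash prints a pattern that is matched literally rather than as a glob. The round-trip check condenses to roughly the following, using the helper names from the trace (the read's redirect is not shown in xtrace and is assumed here):

  read -rn1024 verify_magic < magic.dump1     # first KiB copied back out of uring0
  [[ $verify_magic == "$magic" ]]             # literal comparison, rendered escaped above
  diff -q magic.dump0 magic.dump1             # whole-file check, as at dd/uring.sh@71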
00:18:59.344 [2024-11-20 07:16:23.505872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:18:59.344 { 00:18:59.344 "subsystems": [ 00:18:59.344 { 00:18:59.344 "subsystem": "bdev", 00:18:59.344 "config": [ 00:18:59.344 { 00:18:59.344 "params": { 00:18:59.344 "block_size": 512, 00:18:59.344 "num_blocks": 1048576, 00:18:59.344 "name": "malloc0" 00:18:59.344 }, 00:18:59.344 "method": "bdev_malloc_create" 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "params": { 00:18:59.344 "filename": "/dev/zram1", 00:18:59.344 "name": "uring0" 00:18:59.344 }, 00:18:59.344 "method": "bdev_uring_create" 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "method": "bdev_wait_for_examine" 00:18:59.344 } 00:18:59.344 ] 00:18:59.344 } 00:18:59.344 ] 00:18:59.344 } 00:18:59.601 [2024-11-20 07:16:23.637358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.602 [2024-11-20 07:16:23.670169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.602 [2024-11-20 07:16:23.700316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.972  [2024-11-20T07:16:26.112Z] Copying: 233/512 [MB] (233 MBps) [2024-11-20T07:16:26.112Z] Copying: 467/512 [MB] (234 MBps) [2024-11-20T07:16:26.450Z] Copying: 512/512 [MB] (average 233 MBps) 00:19:02.247 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:02.247 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:19:02.247 [2024-11-20 07:16:26.234316] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:02.247 [2024-11-20 07:16:26.234861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:19:02.247 { 00:19:02.247 "subsystems": [ 00:19:02.247 { 00:19:02.247 "subsystem": "bdev", 00:19:02.247 "config": [ 00:19:02.247 { 00:19:02.247 "params": { 00:19:02.247 "block_size": 512, 00:19:02.247 "num_blocks": 1048576, 00:19:02.247 "name": "malloc0" 00:19:02.247 }, 00:19:02.247 "method": "bdev_malloc_create" 00:19:02.247 }, 00:19:02.247 { 00:19:02.247 "params": { 00:19:02.247 "filename": "/dev/zram1", 00:19:02.247 "name": "uring0" 00:19:02.247 }, 00:19:02.247 "method": "bdev_uring_create" 00:19:02.247 }, 00:19:02.247 { 00:19:02.247 "params": { 00:19:02.247 "name": "uring0" 00:19:02.247 }, 00:19:02.247 "method": "bdev_uring_delete" 00:19:02.247 }, 00:19:02.247 { 00:19:02.247 "method": "bdev_wait_for_examine" 00:19:02.247 } 00:19:02.247 ] 00:19:02.247 } 00:19:02.247 ] 00:19:02.247 } 00:19:02.247 [2024-11-20 07:16:26.369471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.247 [2024-11-20 07:16:26.401316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.506 [2024-11-20 07:16:26.430696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.506  [2024-11-20T07:16:26.968Z] Copying: 0/0 [B] (average 0 Bps) 00:19:02.765 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.765 07:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:02.765 07:16:26 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:02.765 [2024-11-20 07:16:26.790875] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:02.765 [2024-11-20 07:16:26.791040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:19:02.765 { 00:19:02.765 "subsystems": [ 00:19:02.765 { 00:19:02.765 "subsystem": "bdev", 00:19:02.765 "config": [ 00:19:02.765 { 00:19:02.765 "params": { 00:19:02.765 "block_size": 512, 00:19:02.765 "num_blocks": 1048576, 00:19:02.765 "name": "malloc0" 00:19:02.765 }, 00:19:02.765 "method": "bdev_malloc_create" 00:19:02.765 }, 00:19:02.765 { 00:19:02.765 "params": { 00:19:02.765 "filename": "/dev/zram1", 00:19:02.765 "name": "uring0" 00:19:02.765 }, 00:19:02.765 "method": "bdev_uring_create" 00:19:02.765 }, 00:19:02.765 { 00:19:02.765 "params": { 00:19:02.765 "name": "uring0" 00:19:02.765 }, 00:19:02.765 "method": "bdev_uring_delete" 00:19:02.765 }, 00:19:02.765 { 00:19:02.765 "method": "bdev_wait_for_examine" 00:19:02.765 } 00:19:02.765 ] 00:19:02.765 } 00:19:02.765 ] 00:19:02.765 } 00:19:02.765 [2024-11-20 07:16:26.927418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.765 [2024-11-20 07:16:26.959934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.022 [2024-11-20 07:16:26.989481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.023 [2024-11-20 07:16:27.119496] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:19:03.023 [2024-11-20 07:16:27.119685] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:19:03.023 [2024-11-20 07:16:27.119694] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:19:03.023 [2024-11-20 07:16:27.119699] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:03.280 [2024-11-20 07:16:27.265173] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:03.280 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:19:03.280 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.280 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:19:03.280 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:19:03.280 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:03.281 00:19:03.281 real 0m10.515s 00:19:03.281 user 0m7.312s 00:19:03.281 sys 0m8.868s 00:19:03.281 ************************************ 00:19:03.281 END TEST dd_uring_copy 00:19:03.281 ************************************ 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.281 07:16:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:19:03.539 00:19:03.539 real 0m10.705s 00:19:03.539 user 0m7.418s 00:19:03.539 sys 0m8.957s 00:19:03.539 07:16:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.539 07:16:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:19:03.539 ************************************ 00:19:03.539 END TEST spdk_dd_uring 00:19:03.539 ************************************ 00:19:03.539 07:16:27 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:03.539 07:16:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.539 07:16:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.539 07:16:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:03.539 ************************************ 00:19:03.539 START TEST spdk_dd_sparse 00:19:03.539 ************************************ 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:03.539 * Looking for test storage... 00:19:03.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.539 --rc genhtml_branch_coverage=1 00:19:03.539 --rc genhtml_function_coverage=1 00:19:03.539 --rc genhtml_legend=1 00:19:03.539 --rc geninfo_all_blocks=1 00:19:03.539 --rc geninfo_unexecuted_blocks=1 00:19:03.539 00:19:03.539 ' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.539 --rc genhtml_branch_coverage=1 00:19:03.539 --rc genhtml_function_coverage=1 00:19:03.539 --rc genhtml_legend=1 00:19:03.539 --rc geninfo_all_blocks=1 00:19:03.539 --rc geninfo_unexecuted_blocks=1 00:19:03.539 00:19:03.539 ' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.539 --rc genhtml_branch_coverage=1 00:19:03.539 --rc genhtml_function_coverage=1 00:19:03.539 --rc genhtml_legend=1 00:19:03.539 --rc geninfo_all_blocks=1 00:19:03.539 --rc geninfo_unexecuted_blocks=1 00:19:03.539 00:19:03.539 ' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.539 --rc genhtml_branch_coverage=1 00:19:03.539 --rc genhtml_function_coverage=1 00:19:03.539 --rc genhtml_legend=1 00:19:03.539 --rc geninfo_all_blocks=1 00:19:03.539 --rc geninfo_unexecuted_blocks=1 00:19:03.539 00:19:03.539 ' 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.539 07:16:27 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:19:03.539 1+0 records in 00:19:03.539 1+0 records out 00:19:03.539 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00502286 s, 835 MB/s 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:19:03.539 1+0 records in 00:19:03.539 1+0 records out 00:19:03.539 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00452712 s, 926 MB/s 00:19:03.539 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:19:03.540 1+0 records in 00:19:03.540 1+0 records out 00:19:03.540 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00511173 s, 821 MB/s 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:03.540 ************************************ 00:19:03.540 START TEST dd_sparse_file_to_file 00:19:03.540 ************************************ 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:19:03.540 07:16:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:03.798 [2024-11-20 07:16:27.761524] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
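Condensed from the xtrace above, the sparse-test setup boils down to the following sequence (a minimal sketch using the same file names, sizes, and flags as this run; spdk_dd stands in for the full /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd path):

  # 100 MiB sparse backing file for the AIO bdev
  truncate dd_sparse_aio_disk --size 104857600
  # three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB (seek counts in bs units)
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
  # sparse-aware copy; the JSON config on fd 62 creates the aio bdev and dd_lvstore on top of it
  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62

If hole skipping works, file_zero2 ends up with the same apparent size as file_zero1 (37748736 bytes = 3 x 12582912, i.e. 36 MiB) but only 24576 allocated 512-byte blocks (12 MiB — exactly the three 4 MiB data extents), which is what the stat --printf=%s and stat --printf=%b checks below assert.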
00:19:03.798 [2024-11-20 07:16:27.761715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60463 ] 00:19:03.798 { 00:19:03.798 "subsystems": [ 00:19:03.798 { 00:19:03.798 "subsystem": "bdev", 00:19:03.798 "config": [ 00:19:03.798 { 00:19:03.798 "params": { 00:19:03.798 "block_size": 4096, 00:19:03.798 "filename": "dd_sparse_aio_disk", 00:19:03.798 "name": "dd_aio" 00:19:03.798 }, 00:19:03.798 "method": "bdev_aio_create" 00:19:03.798 }, 00:19:03.798 { 00:19:03.798 "params": { 00:19:03.798 "lvs_name": "dd_lvstore", 00:19:03.798 "bdev_name": "dd_aio" 00:19:03.798 }, 00:19:03.798 "method": "bdev_lvol_create_lvstore" 00:19:03.798 }, 00:19:03.798 { 00:19:03.798 "method": "bdev_wait_for_examine" 00:19:03.798 } 00:19:03.798 ] 00:19:03.798 } 00:19:03.798 ] 00:19:03.798 } 00:19:03.798 [2024-11-20 07:16:27.898916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.798 [2024-11-20 07:16:27.934118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.798 [2024-11-20 07:16:27.964556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.056  [2024-11-20T07:16:28.259Z] Copying: 12/36 [MB] (average 1333 MBps) 00:19:04.056 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:19:04.056 ************************************ 00:19:04.056 END TEST dd_sparse_file_to_file 00:19:04.056 ************************************ 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:04.056 00:19:04.056 real 0m0.454s 00:19:04.056 user 0m0.246s 00:19:04.056 sys 0m0.206s 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:04.056 ************************************ 00:19:04.056 START TEST dd_sparse_file_to_bdev 
00:19:04.056 ************************************ 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:19:04.056 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 [2024-11-20 07:16:28.258313] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:04.315 [2024-11-20 07:16:28.258506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60506 ] 00:19:04.315 { 00:19:04.315 "subsystems": [ 00:19:04.315 { 00:19:04.315 "subsystem": "bdev", 00:19:04.315 "config": [ 00:19:04.315 { 00:19:04.315 "params": { 00:19:04.315 "block_size": 4096, 00:19:04.315 "filename": "dd_sparse_aio_disk", 00:19:04.315 "name": "dd_aio" 00:19:04.315 }, 00:19:04.315 "method": "bdev_aio_create" 00:19:04.315 }, 00:19:04.315 { 00:19:04.315 "params": { 00:19:04.315 "lvs_name": "dd_lvstore", 00:19:04.315 "lvol_name": "dd_lvol", 00:19:04.315 "size_in_mib": 36, 00:19:04.315 "thin_provision": true 00:19:04.315 }, 00:19:04.315 "method": "bdev_lvol_create" 00:19:04.315 }, 00:19:04.315 { 00:19:04.315 "method": "bdev_wait_for_examine" 00:19:04.315 } 00:19:04.315 ] 00:19:04.315 } 00:19:04.315 ] 00:19:04.315 } 00:19:04.315 [2024-11-20 07:16:28.398043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.315 [2024-11-20 07:16:28.434472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.315 [2024-11-20 07:16:28.466364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.574  [2024-11-20T07:16:28.777Z] Copying: 12/36 [MB] (average 600 MBps) 00:19:04.574 00:19:04.574 00:19:04.574 real 0m0.454s 00:19:04.574 user 0m0.264s 00:19:04.574 sys 0m0.211s 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:19:04.574 ************************************ 00:19:04.574 END TEST dd_sparse_file_to_bdev 00:19:04.574 ************************************ 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:04.574 ************************************ 00:19:04.574 START TEST dd_sparse_bdev_to_file 00:19:04.574 ************************************ 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:19:04.574 07:16:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:04.574 [2024-11-20 07:16:28.751216] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:04.574 [2024-11-20 07:16:28.751338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60538 ] 00:19:04.574 { 00:19:04.574 "subsystems": [ 00:19:04.574 { 00:19:04.574 "subsystem": "bdev", 00:19:04.574 "config": [ 00:19:04.574 { 00:19:04.574 "params": { 00:19:04.574 "block_size": 4096, 00:19:04.574 "filename": "dd_sparse_aio_disk", 00:19:04.574 "name": "dd_aio" 00:19:04.574 }, 00:19:04.574 "method": "bdev_aio_create" 00:19:04.574 }, 00:19:04.574 { 00:19:04.574 "method": "bdev_wait_for_examine" 00:19:04.574 } 00:19:04.574 ] 00:19:04.574 } 00:19:04.574 ] 00:19:04.574 } 00:19:04.832 [2024-11-20 07:16:28.892828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.832 [2024-11-20 07:16:28.928790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.832 [2024-11-20 07:16:28.959880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.832  [2024-11-20T07:16:29.293Z] Copying: 12/36 [MB] (average 1090 MBps) 00:19:05.090 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:05.090 00:19:05.090 real 0m0.452s 00:19:05.090 user 0m0.252s 00:19:05.090 sys 0m0.219s 00:19:05.090 ************************************ 00:19:05.090 END TEST dd_sparse_bdev_to_file 00:19:05.090 ************************************ 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:19:05.090 ************************************ 00:19:05.090 END TEST spdk_dd_sparse 00:19:05.090 ************************************ 00:19:05.090 00:19:05.090 real 0m1.675s 00:19:05.090 user 0m0.886s 00:19:05.090 sys 0m0.829s 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.090 07:16:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:19:05.090 07:16:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:05.090 07:16:29 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.090 07:16:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.090 07:16:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:05.090 ************************************ 00:19:05.090 START TEST spdk_dd_negative 00:19:05.090 ************************************ 00:19:05.090 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:05.350 * Looking for test storage... 
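Every case in this run — the sparse tests above and the spdk_dd_negative suite that starts here — is driven by the same run_test wrapper visible in the xtrace (run_test NAME FUNC): it prints the START TEST banner, times the test function, and prints the END TEST banner together with the real/user/sys totals. A condensed sketch of that pattern (not the verbatim autotest_common.sh helper, which also toggles xtrace and validates its arguments, per the '[' 2 -le 1 ']' checks in the trace):

  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"          # the test function, e.g. file_to_file or invalid_arguments
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
  }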
00:19:05.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.350 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.350 ************************************ 00:19:05.351 START TEST 
dd_invalid_arguments 00:19:05.351 ************************************ 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:05.351 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:19:05.351 00:19:05.351 CPU options: 00:19:05.351 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:19:05.351 (like [0,1,10]) 00:19:05.351 --lcores lcore to CPU mapping list. The list is in the format: 00:19:05.351 [<,lcores[@CPUs]>...] 00:19:05.351 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:19:05.351 Within the group, '-' is used for range separator, 00:19:05.351 ',' is used for single number separator. 00:19:05.351 '( )' can be omitted for single element group, 00:19:05.351 '@' can be omitted if cpus and lcores have the same value 00:19:05.351 --disable-cpumask-locks Disable CPU core lock files. 00:19:05.351 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:19:05.351 pollers in the app support interrupt mode) 00:19:05.351 -p, --main-core main (primary) core for DPDK 00:19:05.351 00:19:05.351 Configuration options: 00:19:05.351 -c, --config, --json JSON config file 00:19:05.351 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:19:05.351 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:19:05.351 --wait-for-rpc wait for RPCs to initialize subsystems 00:19:05.351 --rpcs-allowed comma-separated list of permitted RPCS 00:19:05.351 --json-ignore-init-errors don't exit on invalid config entry 00:19:05.351 00:19:05.351 Memory options: 00:19:05.351 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:19:05.351 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:19:05.351 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:19:05.351 -R, --huge-unlink unlink huge files after initialization 00:19:05.351 -n, --mem-channels number of memory channels used for DPDK 00:19:05.351 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:19:05.351 --msg-mempool-size global message memory pool size in count (default: 262143) 00:19:05.351 --no-huge run without using hugepages 00:19:05.351 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:19:05.351 -i, --shm-id shared memory ID (optional) 00:19:05.351 -g, --single-file-segments force creating just one hugetlbfs file 00:19:05.351 00:19:05.351 PCI options: 00:19:05.351 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:19:05.351 -B, --pci-blocked pci addr to block (can be used more than once) 00:19:05.351 -u, --no-pci disable PCI access 00:19:05.351 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:19:05.351 00:19:05.351 Log options: 00:19:05.351 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:19:05.351 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:19:05.351 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:19:05.351 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:19:05.351 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:19:05.351 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:19:05.351 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:19:05.351 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:19:05.351 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:19:05.351 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:19:05.351 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:19:05.351 --silence-noticelog disable notice level logging to stderr 00:19:05.351 00:19:05.351 Trace options: 00:19:05.351 --num-trace-entries number of trace entries for each core, must be power of 2, 00:19:05.351 setting 0 to disable trace (default 32768) 00:19:05.351 Tracepoints vary in size and can use more than one trace entry. 00:19:05.351 -e, --tpoint-group [:] 00:19:05.351 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:19:05.351 [2024-11-20 07:16:29.455653] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:19:05.351 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:19:05.351 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:19:05.351 bdev_raid, scheduler, all). 00:19:05.351 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:19:05.351 a tracepoint group. First tpoint inside a group can be enabled by 00:19:05.351 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:19:05.351 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:19:05.351 in /include/spdk_internal/trace_defs.h 00:19:05.351 00:19:05.351 Other options: 00:19:05.351 -h, --help show this usage 00:19:05.351 -v, --version print SPDK version 00:19:05.351 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:19:05.351 --env-context Opaque context for use of the env implementation 00:19:05.351 00:19:05.351 Application specific: 00:19:05.351 [--------- DD Options ---------] 00:19:05.351 --if Input file. Must specify either --if or --ib. 00:19:05.351 --ib Input bdev. Must specifier either --if or --ib 00:19:05.351 --of Output file. Must specify either --of or --ob. 00:19:05.351 --ob Output bdev. Must specify either --of or --ob. 00:19:05.351 --iflag Input file flags. 00:19:05.351 --oflag Output file flags. 00:19:05.351 --bs I/O unit size (default: 4096) 00:19:05.351 --qd Queue depth (default: 2) 00:19:05.351 --count I/O unit count. The number of I/O units to copy. (default: all) 00:19:05.351 --skip Skip this many I/O units at start of input. (default: 0) 00:19:05.351 --seek Skip this many I/O units at start of output. (default: 0) 00:19:05.351 --aio Force usage of AIO. (by default io_uring is used if available) 00:19:05.351 --sparse Enable hole skipping in input target 00:19:05.351 Available iflag and oflag values: 00:19:05.351 append - append mode 00:19:05.351 direct - use direct I/O for data 00:19:05.351 directory - fail unless a directory 00:19:05.351 dsync - use synchronized I/O for data 00:19:05.351 noatime - do not update access time 00:19:05.351 noctty - do not assign controlling terminal from file 00:19:05.351 nofollow - do not follow symlinks 00:19:05.351 nonblock - use non-blocking I/O 00:19:05.351 sync - use synchronized I/O for data and metadata 00:19:05.351 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.352 00:19:05.352 real 0m0.059s 00:19:05.352 user 0m0.039s 00:19:05.352 sys 0m0.019s 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:19:05.352 ************************************ 00:19:05.352 END TEST dd_invalid_arguments 00:19:05.352 ************************************ 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.352 ************************************ 00:19:05.352 START TEST dd_double_input 00:19:05.352 ************************************ 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.352 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:05.352 [2024-11-20 07:16:29.542811] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
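All of the negative cases follow the NOT pattern traced above: invoke spdk_dd with contradictory arguments and require a non-zero exit status. A condensed sketch of this dd_double_input case, assuming a simplified NOT helper (the real autotest_common.sh version also remaps statuses above 128, as the es arithmetic later in this log shows), with dd.dump0 abbreviating the full test/dd path:

  NOT() { if "$@"; then return 1; else return 0; fi; }
  # both --if and --ib given: spdk_dd prints the error above and exits 22 (EINVAL)
  NOT spdk_dd --if=dd.dump0 --ib= --ob=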
00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.611 ************************************ 00:19:05.611 END TEST dd_double_input 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.611 00:19:05.611 real 0m0.048s 00:19:05.611 user 0m0.030s 00:19:05.611 sys 0m0.016s 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 ************************************ 00:19:05.611 START TEST dd_double_output 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:05.611 [2024-11-20 07:16:29.628490] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.611 00:19:05.611 real 0m0.049s 00:19:05.611 user 0m0.028s 00:19:05.611 sys 0m0.021s 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.611 ************************************ 00:19:05.611 END TEST dd_double_output 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 ************************************ 00:19:05.611 START TEST dd_no_input 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:05.611 [2024-11-20 07:16:29.711988] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.611 00:19:05.611 real 0m0.048s 00:19:05.611 user 0m0.030s 00:19:05.611 sys 0m0.017s 00:19:05.611 ************************************ 00:19:05.611 END TEST dd_no_input 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.611 ************************************ 00:19:05.611 START TEST dd_no_output 00:19:05.611 ************************************ 00:19:05.611 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:05.612 [2024-11-20 07:16:29.798569] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:19:05.612 07:16:29 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.612 ************************************ 00:19:05.612 END TEST dd_no_output 00:19:05.612 ************************************ 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.612 00:19:05.612 real 0m0.048s 00:19:05.612 user 0m0.026s 00:19:05.612 sys 0m0.021s 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.612 07:16:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.871 ************************************ 00:19:05.871 START TEST dd_wrong_blocksize 00:19:05.871 ************************************ 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:05.871 [2024-11-20 07:16:29.884578] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.871 00:19:05.871 real 0m0.049s 00:19:05.871 user 0m0.032s 00:19:05.871 sys 0m0.016s 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:19:05.871 ************************************ 00:19:05.871 END TEST dd_wrong_blocksize 00:19:05.871 ************************************ 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:05.871 ************************************ 00:19:05.871 START TEST dd_smaller_blocksize 00:19:05.871 ************************************ 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.871 
07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:05.871 07:16:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:05.871 [2024-11-20 07:16:29.969944] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:05.871 [2024-11-20 07:16:29.970009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:19:06.131 [2024-11-20 07:16:30.109903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.131 [2024-11-20 07:16:30.145562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.131 [2024-11-20 07:16:30.176444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.389 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:06.389 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:06.648 [2024-11-20 07:16:30.596781] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:19:06.648 [2024-11-20 07:16:30.596836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:06.648 [2024-11-20 07:16:30.655209] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.648 00:19:06.648 real 0m0.767s 00:19:06.648 user 0m0.219s 00:19:06.648 sys 0m0.442s 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.648 ************************************ 00:19:06.648 END TEST dd_smaller_blocksize 00:19:06.648 ************************************ 00:19:06.648 07:16:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 ************************************ 00:19:06.649 START TEST dd_invalid_count 00:19:06.649 ************************************ 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
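dd_smaller_blocksize is the one case above that fails at runtime rather than at argument parsing: --bs=99999999999999 (roughly 91 TiB) is syntactically valid, but buffer allocation fails ("Cannot allocate memory - try smaller block size value") and spdk_dd stops with a non-zero code (spdk_app_stop'd on non-zero). The es arithmetic in the trace then normalizes that status before the final assertion — a sketch of the logic as recorded, not the verbatim helper:

  es=244                                  # raw exit status captured from spdk_dd
  (( es > 128 )) && es=$(( es - 128 ))    # 244 -> 116
  case $es in 116) es=1 ;; esac           # remap to a plain failure code
  (( !es == 0 ))                          # any non-zero es means the negative test passed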
00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:06.649 [2024-11-20 07:16:30.773883] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.649 00:19:06.649 real 0m0.048s 00:19:06.649 user 0m0.029s 00:19:06.649 sys 0m0.019s 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.649 ************************************ 00:19:06.649 END TEST dd_invalid_count 00:19:06.649 ************************************ 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 ************************************ 
00:19:06.649 START TEST dd_invalid_oflag 00:19:06.649 ************************************ 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:06.649 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:06.908 [2024-11-20 07:16:30.864306] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.908 00:19:06.908 real 0m0.050s 00:19:06.908 user 0m0.028s 00:19:06.908 sys 0m0.021s 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.908 ************************************ 00:19:06.908 END TEST dd_invalid_oflag 00:19:06.908 ************************************ 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:06.908 ************************************ 00:19:06.908 START TEST dd_invalid_iflag 00:19:06.908 
************************************ 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:06.908 [2024-11-20 07:16:30.950022] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.908 00:19:06.908 real 0m0.047s 00:19:06.908 user 0m0.033s 00:19:06.908 sys 0m0.013s 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.908 ************************************ 00:19:06.908 END TEST dd_invalid_iflag 00:19:06.908 ************************************ 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:06.908 ************************************ 00:19:06.908 START TEST dd_unknown_flag 00:19:06.908 ************************************ 00:19:06.908 
07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:06.908 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:06.908 [2024-11-20 07:16:31.034164] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
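The @640-@646 burst that precedes every one of these runs is valid_exec_arg deciding how to invoke the target. A condensed sketch, with the body reconstructed from those xtrace lines (the real helper in autotest_common.sh may handle more cases):

valid_exec_arg() {
    local arg=$1
    # type -t classifies the target; plain files are resolved to a full
    # path with type -P and must be executable -- the [[ -x ... ]] check
    # seen in the trace.
    case "$(type -t "$arg")" in
        function | builtin) return 0 ;;
        file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
        *) return 1 ;;
    esac
}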
00:19:06.908 [2024-11-20 07:16:31.034244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:19:07.166 [2024-11-20 07:16:31.165269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.166 [2024-11-20 07:16:31.201171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.166 [2024-11-20 07:16:31.232107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:07.166 [2024-11-20 07:16:31.255772] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:19:07.166 [2024-11-20 07:16:31.255814] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.166 [2024-11-20 07:16:31.255850] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:19:07.166 [2024-11-20 07:16:31.255858] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.166 [2024-11-20 07:16:31.256005] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:19:07.166 [2024-11-20 07:16:31.256013] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.166 [2024-11-20 07:16:31.256045] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:19:07.166 [2024-11-20 07:16:31.256050] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:19:07.166 [2024-11-20 07:16:31.313686] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.167 00:19:07.167 real 0m0.359s 00:19:07.167 user 0m0.174s 00:19:07.167 sys 0m0.095s 00:19:07.167 ************************************ 00:19:07.167 END TEST dd_unknown_flag 00:19:07.167 ************************************ 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.167 07:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:07.425 ************************************ 00:19:07.425 START TEST dd_invalid_json 00:19:07.425 ************************************ 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:07.425 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:07.425 [2024-11-20 07:16:31.440951] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
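dd_invalid_json hands spdk_dd an empty document: the lone `:` at @94 above is a no-op whose output feeds /dev/fd/62, which is what later provokes "JSON data cannot be empty". The invocation shape, sketched using the NOT wrapper reconstructed earlier:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
# <(:) runs the null builtin, so the file descriptor spdk_dd receives as
# --json is empty, and the copy must fail.
NOT "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1" --json <(:)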
00:19:07.425 [2024-11-20 07:16:31.441018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:19:07.425 [2024-11-20 07:16:31.578462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.425 [2024-11-20 07:16:31.615033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.425 [2024-11-20 07:16:31.615089] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:19:07.425 [2024-11-20 07:16:31.615101] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:07.425 [2024-11-20 07:16:31.615107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.425 [2024-11-20 07:16:31.615133] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.683 00:19:07.683 real 0m0.259s 00:19:07.683 user 0m0.110s 00:19:07.683 sys 0m0.048s 00:19:07.683 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:19:07.684 ************************************ 00:19:07.684 END TEST dd_invalid_json 00:19:07.684 ************************************ 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:07.684 ************************************ 00:19:07.684 START TEST dd_invalid_seek 00:19:07.684 ************************************ 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:07.684 
07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:07.684 07:16:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:19:07.684 { 00:19:07.684 "subsystems": [ 00:19:07.684 { 00:19:07.684 "subsystem": "bdev", 00:19:07.684 "config": [ 00:19:07.684 { 00:19:07.684 "params": { 00:19:07.684 "block_size": 512, 00:19:07.684 "num_blocks": 512, 00:19:07.684 "name": "malloc0" 00:19:07.684 }, 00:19:07.684 "method": "bdev_malloc_create" 00:19:07.684 }, 00:19:07.684 { 00:19:07.684 "params": { 00:19:07.684 "block_size": 512, 00:19:07.684 "num_blocks": 512, 00:19:07.684 "name": "malloc1" 00:19:07.684 }, 00:19:07.684 "method": "bdev_malloc_create" 00:19:07.684 }, 00:19:07.684 { 00:19:07.684 "method": "bdev_wait_for_examine" 00:19:07.684 } 00:19:07.684 ] 00:19:07.684 } 00:19:07.684 ] 00:19:07.684 } 00:19:07.684 [2024-11-20 07:16:31.738391] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
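The JSON dumped above is the entire fixture for dd_invalid_seek: two 512-block, 512-byte malloc bdevs delivered to spdk_dd over a process substitution. A sketch of the pattern, reusing the SPDK_DD shorthand and NOT wrapper from the earlier sketches; gen_malloc_conf is a stand-in for the gen_conf helper named in the trace, with the heredoc reproducing the config printed above:

gen_malloc_conf() {
    cat <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 512, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"block_size": 512, "num_blocks": 512, "name": "malloc1"},
   "method": "bdev_malloc_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
}
# Block 513 of a 512-block output bdev does not exist, so this must fail
# with "--seek value too big (513) - only 512 blocks available in output".
NOT "$SPDK_DD" --ib=malloc0 --ob=malloc1 --seek=513 --json <(gen_malloc_conf) --bs=512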
00:19:07.684 [2024-11-20 07:16:31.738576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:19:07.684 [2024-11-20 07:16:31.877653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.941 [2024-11-20 07:16:31.913512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.941 [2024-11-20 07:16:31.945602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:07.941 [2024-11-20 07:16:31.994615] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:19:07.941 [2024-11-20 07:16:31.994666] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.941 [2024-11-20 07:16:32.054876] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:07.941 ************************************ 00:19:07.941 END TEST dd_invalid_seek 00:19:07.941 ************************************ 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:19:07.941 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.941 00:19:07.942 real 0m0.405s 00:19:07.942 user 0m0.247s 00:19:07.942 sys 0m0.094s 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:07.942 ************************************ 00:19:07.942 START TEST dd_invalid_skip 00:19:07.942 ************************************ 00:19:07.942 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:08.199 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:19:08.199 { 00:19:08.199 "subsystems": [ 00:19:08.199 { 00:19:08.199 "subsystem": "bdev", 00:19:08.199 "config": [ 00:19:08.199 { 00:19:08.199 "params": { 00:19:08.199 "block_size": 512, 00:19:08.199 "num_blocks": 512, 00:19:08.199 "name": "malloc0" 00:19:08.199 }, 00:19:08.199 "method": "bdev_malloc_create" 00:19:08.199 }, 00:19:08.199 { 00:19:08.199 "params": { 00:19:08.199 "block_size": 512, 00:19:08.199 "num_blocks": 512, 00:19:08.199 "name": "malloc1" 00:19:08.199 }, 00:19:08.199 "method": "bdev_malloc_create" 00:19:08.199 }, 00:19:08.199 { 00:19:08.199 "method": "bdev_wait_for_examine" 00:19:08.199 } 00:19:08.199 ] 00:19:08.199 } 00:19:08.199 ] 00:19:08.199 } 00:19:08.199 [2024-11-20 07:16:32.182160] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
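Every section in this run is framed the same way: an asterisk banner pair and a closing real/user/sys triple. That framing comes from run_test, invoked in each `run_test dd_* ...` line above; a simplified sketch (the real wrapper also toggles xtrace, and the banner width here is illustrative):

run_test() {
    # The '[' 2 -le 1 ']' probes in the log are this arity guard.
    (( $# > 1 )) || return 1
    local name=$1
    shift
    printf '%s\n' '************************************' \
                  "START TEST $name" \
                  '************************************'
    # The time keyword emits the real/user/sys lines seen after each test.
    time "$@"
    local rc=$?
    printf '%s\n' '************************************' \
                  "END TEST $name" \
                  '************************************'
    return "$rc"
}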
00:19:08.199 [2024-11-20 07:16:32.182234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60943 ] 00:19:08.199 [2024-11-20 07:16:32.321291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.199 [2024-11-20 07:16:32.357517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.199 [2024-11-20 07:16:32.389099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.457 [2024-11-20 07:16:32.438734] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:19:08.457 [2024-11-20 07:16:32.438929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.458 [2024-11-20 07:16:32.498375] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:19:08.458 ************************************ 00:19:08.458 END TEST dd_invalid_skip 00:19:08.458 ************************************ 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.458 00:19:08.458 real 0m0.400s 00:19:08.458 user 0m0.232s 00:19:08.458 sys 0m0.104s 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:08.458 ************************************ 00:19:08.458 START TEST dd_invalid_input_count 00:19:08.458 ************************************ 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:08.458 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:19:08.458 { 00:19:08.458 "subsystems": [ 00:19:08.458 { 00:19:08.458 "subsystem": "bdev", 00:19:08.458 "config": [ 00:19:08.458 { 00:19:08.458 "params": { 00:19:08.458 "block_size": 512, 00:19:08.458 "num_blocks": 512, 00:19:08.458 "name": "malloc0" 00:19:08.458 }, 00:19:08.458 "method": "bdev_malloc_create" 00:19:08.458 }, 00:19:08.458 { 00:19:08.458 "params": { 00:19:08.458 "block_size": 512, 00:19:08.458 "num_blocks": 512, 00:19:08.458 "name": "malloc1" 00:19:08.458 }, 00:19:08.458 "method": "bdev_malloc_create" 00:19:08.458 }, 00:19:08.458 { 00:19:08.458 "method": "bdev_wait_for_examine" 00:19:08.458 } 00:19:08.458 ] 00:19:08.458 } 00:19:08.458 ] 00:19:08.458 } 00:19:08.458 [2024-11-20 07:16:32.621917] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
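dd_invalid_seek, dd_invalid_skip, and dd_invalid_input_count exercise one bounds check from three sides: the output offset, the input offset, and the transfer count, each asking for block 513 of a 512-block bdev. Condensed into a single loop over the invocations visible above (NOT and gen_malloc_conf as sketched earlier):

for bad_arg in --seek=513 --skip=513 --count=513; do
    # Each run must fail with its own "--... value too big (513)" message.
    NOT "$SPDK_DD" --ib=malloc0 --ob=malloc1 "$bad_arg" --json <(gen_malloc_conf) --bs=512
done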
00:19:08.458 [2024-11-20 07:16:32.621982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60971 ] 00:19:08.716 [2024-11-20 07:16:32.761875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.716 [2024-11-20 07:16:32.797972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.716 [2024-11-20 07:16:32.829544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.716 [2024-11-20 07:16:32.878769] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:19:08.716 [2024-11-20 07:16:32.878817] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.974 [2024-11-20 07:16:32.938713] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.974 00:19:08.974 real 0m0.399s 00:19:08.974 user 0m0.243s 00:19:08.974 sys 0m0.092s 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.974 ************************************ 00:19:08.974 END TEST dd_invalid_input_count 00:19:08.974 ************************************ 00:19:08.974 07:16:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:08.974 ************************************ 00:19:08.974 START TEST dd_invalid_output_count 00:19:08.974 ************************************ 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:08.974 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:19:08.974 { 00:19:08.974 "subsystems": [ 00:19:08.974 { 00:19:08.974 "subsystem": "bdev", 00:19:08.974 "config": [ 00:19:08.974 { 00:19:08.974 "params": { 00:19:08.974 "block_size": 512, 00:19:08.974 "num_blocks": 512, 00:19:08.974 "name": "malloc0" 00:19:08.974 }, 00:19:08.974 "method": "bdev_malloc_create" 00:19:08.974 }, 00:19:08.974 { 00:19:08.974 "method": "bdev_wait_for_examine" 00:19:08.974 } 00:19:08.974 ] 00:19:08.974 } 00:19:08.974 ] 00:19:08.974 } 00:19:08.974 [2024-11-20 07:16:33.059700] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:08.974 [2024-11-20 07:16:33.059762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61010 ] 00:19:09.232 [2024-11-20 07:16:33.200329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.232 [2024-11-20 07:16:33.237385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.232 [2024-11-20 07:16:33.269126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.232 [2024-11-20 07:16:33.309797] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:19:09.232 [2024-11-20 07:16:33.309842] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:09.232 [2024-11-20 07:16:33.368480] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.232 00:19:09.232 real 0m0.392s 00:19:09.232 user 0m0.232s 00:19:09.232 sys 0m0.090s 00:19:09.232 ************************************ 00:19:09.232 END TEST dd_invalid_output_count 00:19:09.232 ************************************ 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.232 07:16:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:19:09.489 07:16:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:19:09.489 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:09.490 ************************************ 00:19:09.490 START TEST dd_bs_not_multiple 00:19:09.490 ************************************ 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:19:09.490 07:16:33 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:09.490 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:19:09.490 [2024-11-20 07:16:33.487140] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
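The last negative case drives --bs=513 at bdevs whose native block size is 512. The rule it trips, restated as shell arithmetic (a sketch only; the real check lives in spdk_dd's C source):

native_bs=512
bs=513
# 513 = 512 + 1, so the remainder is 1 and spdk_dd rejects the value with
# "--bs value must be a multiple of input native block size (512)".
if (( bs % native_bs != 0 )); then
    echo "invalid --bs: $bs is not a multiple of $native_bs" >&2
fi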
00:19:09.490 [2024-11-20 07:16:33.487200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:19:09.490 { 00:19:09.490 "subsystems": [ 00:19:09.490 { 00:19:09.490 "subsystem": "bdev", 00:19:09.490 "config": [ 00:19:09.490 { 00:19:09.490 "params": { 00:19:09.490 "block_size": 512, 00:19:09.490 "num_blocks": 512, 00:19:09.490 "name": "malloc0" 00:19:09.490 }, 00:19:09.490 "method": "bdev_malloc_create" 00:19:09.490 }, 00:19:09.490 { 00:19:09.490 "params": { 00:19:09.490 "block_size": 512, 00:19:09.490 "num_blocks": 512, 00:19:09.490 "name": "malloc1" 00:19:09.490 }, 00:19:09.490 "method": "bdev_malloc_create" 00:19:09.490 }, 00:19:09.490 { 00:19:09.490 "method": "bdev_wait_for_examine" 00:19:09.490 } 00:19:09.490 ] 00:19:09.490 } 00:19:09.490 ] 00:19:09.490 } 00:19:09.490 [2024-11-20 07:16:33.627789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.490 [2024-11-20 07:16:33.663896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.749 [2024-11-20 07:16:33.695617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.749 [2024-11-20 07:16:33.750386] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:19:09.749 [2024-11-20 07:16:33.750599] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:09.749 [2024-11-20 07:16:33.819546] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.749 00:19:09.749 real 0m0.415s 00:19:09.749 user 0m0.255s 00:19:09.749 sys 0m0.096s 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:19:09.749 ************************************ 00:19:09.749 END TEST dd_bs_not_multiple 00:19:09.749 ************************************ 00:19:09.749 ************************************ 00:19:09.749 END TEST spdk_dd_negative 00:19:09.749 ************************************ 00:19:09.749 00:19:09.749 real 0m4.640s 00:19:09.749 user 0m2.316s 00:19:09.749 sys 0m1.682s 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.749 07:16:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:19:09.749 ************************************ 00:19:09.749 END TEST spdk_dd 00:19:09.749 ************************************ 00:19:09.749 00:19:09.749 real 0m56.294s 00:19:09.749 user 0m34.852s 00:19:09.749 sys 0m22.291s 00:19:09.749 07:16:33 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:19:09.749 07:16:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:10.008 07:16:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:10.008 07:16:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:10.008 07:16:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:10.008 07:16:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.009 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:19:10.009 07:16:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:10.009 07:16:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:10.009 07:16:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:19:10.009 07:16:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:19:10.009 07:16:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:19:10.009 07:16:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:19:10.009 07:16:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:10.009 07:16:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.009 07:16:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.009 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:19:10.009 ************************************ 00:19:10.009 START TEST nvmf_tcp 00:19:10.009 ************************************ 00:19:10.009 07:16:33 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:10.009 * Looking for test storage... 00:19:10.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.009 07:16:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.009 --rc genhtml_branch_coverage=1 00:19:10.009 --rc genhtml_function_coverage=1 00:19:10.009 --rc genhtml_legend=1 00:19:10.009 --rc geninfo_all_blocks=1 00:19:10.009 --rc geninfo_unexecuted_blocks=1 00:19:10.009 00:19:10.009 ' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.009 --rc genhtml_branch_coverage=1 00:19:10.009 --rc genhtml_function_coverage=1 00:19:10.009 --rc genhtml_legend=1 00:19:10.009 --rc geninfo_all_blocks=1 00:19:10.009 --rc geninfo_unexecuted_blocks=1 00:19:10.009 00:19:10.009 ' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.009 --rc genhtml_branch_coverage=1 00:19:10.009 --rc genhtml_function_coverage=1 00:19:10.009 --rc genhtml_legend=1 00:19:10.009 --rc geninfo_all_blocks=1 00:19:10.009 --rc geninfo_unexecuted_blocks=1 00:19:10.009 00:19:10.009 ' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.009 --rc genhtml_branch_coverage=1 00:19:10.009 --rc genhtml_function_coverage=1 00:19:10.009 --rc genhtml_legend=1 00:19:10.009 --rc geninfo_all_blocks=1 00:19:10.009 --rc geninfo_unexecuted_blocks=1 00:19:10.009 00:19:10.009 ' 00:19:10.009 07:16:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:19:10.009 07:16:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:19:10.009 07:16:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.009 07:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:10.009 ************************************ 00:19:10.009 START TEST nvmf_target_core 00:19:10.009 ************************************ 00:19:10.009 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:19:10.009 * Looking for test storage... 00:19:10.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.268 --rc genhtml_branch_coverage=1 00:19:10.268 --rc genhtml_function_coverage=1 00:19:10.268 --rc genhtml_legend=1 00:19:10.268 --rc geninfo_all_blocks=1 00:19:10.268 --rc geninfo_unexecuted_blocks=1 00:19:10.268 00:19:10.268 ' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.268 --rc genhtml_branch_coverage=1 00:19:10.268 --rc genhtml_function_coverage=1 00:19:10.268 --rc genhtml_legend=1 00:19:10.268 --rc geninfo_all_blocks=1 00:19:10.268 --rc geninfo_unexecuted_blocks=1 00:19:10.268 00:19:10.268 ' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.268 --rc genhtml_branch_coverage=1 00:19:10.268 --rc genhtml_function_coverage=1 00:19:10.268 --rc genhtml_legend=1 00:19:10.268 --rc geninfo_all_blocks=1 00:19:10.268 --rc geninfo_unexecuted_blocks=1 00:19:10.268 00:19:10.268 ' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.268 --rc genhtml_branch_coverage=1 00:19:10.268 --rc genhtml_function_coverage=1 00:19:10.268 --rc genhtml_legend=1 00:19:10.268 --rc geninfo_all_blocks=1 00:19:10.268 --rc geninfo_unexecuted_blocks=1 00:19:10.268 00:19:10.268 ' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:10.268 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:10.269 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:10.269 ************************************ 00:19:10.269 START TEST nvmf_host_management 00:19:10.269 ************************************ 00:19:10.269 07:16:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:10.269 * Looking for test storage... 00:19:10.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.269 --rc genhtml_branch_coverage=1 00:19:10.269 --rc genhtml_function_coverage=1 00:19:10.269 --rc genhtml_legend=1 00:19:10.269 --rc geninfo_all_blocks=1 00:19:10.269 --rc geninfo_unexecuted_blocks=1 00:19:10.269 00:19:10.269 ' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.269 --rc genhtml_branch_coverage=1 00:19:10.269 --rc genhtml_function_coverage=1 00:19:10.269 --rc genhtml_legend=1 00:19:10.269 --rc geninfo_all_blocks=1 00:19:10.269 --rc geninfo_unexecuted_blocks=1 00:19:10.269 00:19:10.269 ' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.269 --rc genhtml_branch_coverage=1 00:19:10.269 --rc genhtml_function_coverage=1 00:19:10.269 --rc genhtml_legend=1 00:19:10.269 --rc geninfo_all_blocks=1 00:19:10.269 --rc geninfo_unexecuted_blocks=1 00:19:10.269 00:19:10.269 ' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.269 --rc genhtml_branch_coverage=1 00:19:10.269 --rc genhtml_function_coverage=1 00:19:10.269 --rc genhtml_legend=1 00:19:10.269 --rc geninfo_all_blocks=1 00:19:10.269 --rc geninfo_unexecuted_blocks=1 00:19:10.269 00:19:10.269 ' 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
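The cmp_versions walk traced above splits both version strings into fields (the real helper splits on '.', '-' and ':' via IFS=.-:) and compares them numerically left to right; a condensed sketch that reproduces the 'lt 1.15 2' result:

    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1              # equal versions are not "less than"
    }
    lt 1.15 2 && echo older   # prints "older", matching the trace's return 0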
00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.269 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.270 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:10.530 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:10.530 07:16:34 
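The '[: : integer expression expected' complaint just above is not fatal: bash's test builtin was handed an empty expansion where '-eq' needs an integer (common.sh line 31), so the check simply returns status 2 and the script carries on. A standalone reproduction:

    v=""
    [ "$v" -eq 1 ]            # -> bash: [: : integer expression expected (status 2)
    [ "${v:-0}" -eq 1 ]       # defensive variant: default the empty value to 0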
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.530 07:16:34 
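nvmftestinit's namespace bootstrap, just traced (create_target_ns plus set_up lo), reduces to two commands when written out by hand:

    ip netns add nvmf_ns_spdk                     # target-side network namespace
    ip netns exec nvmf_ns_spdk ip link set lo up  # bring up loopback inside it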
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:10.530 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == 
tcp ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:10.531 07:16:34 
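Each interface pair in the trace is built from two veth pairs, one per side, with the *_br ends reserved for the bridge; pair 0 as plain commands, condensed from the create_veth/set_up calls above:

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set initiator0 up; ip link set initiator0_br up
    ip link set target0    up; ip link set target0_br up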
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:10.531 10.0.0.1 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:10.531 10.0.0.2 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:10.531 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:10.532 07:16:34 
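Condensing the rest of pair 0's wiring from the trace: target0 moves into the namespace, the two ends get consecutive addresses from the 10.0.0.0/24 pool, the *_br ends are enslaved to nvmf_br, and TCP port 4420 is opened on the initiator (the harness additionally tags the rule with an 'SPDK_NVMF:' comment):

    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br
    ip link set target0_br   master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT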
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:10.532 10.0.0.3 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:10.532 10.0.0.4 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.532 07:16:34 
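val_to_ip converts the integer pool counter into dotted-quad form (167772161 = 0x0A000001 becomes 10.0.0.1; here 167772163 and 167772164 become 10.0.0.3 and 10.0.0.4). The byte extraction itself is not visible in the xtrace, so the shifts below are one plausible reconstruction rather than the helper's literal body:

    val_to_ip() {
        local val=$1            # e.g. 167772163
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val        & 255 ))
    }
    val_to_ip 167772163         # -> 10.0.0.3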
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:10.532 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:10.533 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:10.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:19:10.792 00:19:10.792 --- 10.0.0.1 ping statistics --- 00:19:10.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.792 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:19:10.792 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:10.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:10.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:19:10.793 00:19:10.793 --- 10.0.0.2 ping statistics --- 00:19:10.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.793 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:10.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:10.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:10.793 00:19:10.793 --- 10.0.0.3 ping statistics --- 00:19:10.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.793 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:10.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:10.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:10.793 00:19:10.793 --- 10.0.0.4 ping statistics --- 00:19:10.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.793 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:10.793 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
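Editor's note: the xtrace from @205 down to here is nvmf/setup.sh finishing its network plumbing: the records above show veth pair 1 (initiator1/target1) being brought up and its bridge ends attached to nvmf_br (pair 0 got the same treatment earlier in the run), a per-initiator iptables ACCEPT for TCP port 4420 tagged with an 'SPDK_NVMF:' comment (the tag lets cleanup later delete exactly the rules this test inserted), and a ping sweep of all four addresses. Each interface's IP is stashed in its kernel ifalias, so every address lookup is just a file read; target devices live inside the nvmf_ns_spdk namespace while initiators stay in the default one. A minimal sketch of that lookup pattern, assuming only the device and namespace names visible in this log:

    # get_ip <dev> [netns] - read the IP that setup.sh stored in the ifalias
    get_ip() {
        local dev=$1 ns=$2 cmd=()
        [[ -n $ns ]] && cmd=(ip netns exec "$ns")
        "${cmd[@]}" cat "/sys/class/net/$dev/ifalias"
    }
    get_ip initiator0             # -> 10.0.0.1 (host side of pair 0)
    get_ip target0 nvmf_ns_spdk   # -> 10.0.0.2 (namespace side of pair 0)

Each pair consumes two addresses (the '(( _dev++, ip_pool += 2 ))' step above), which is why pair 0 owns 10.0.0.1/10.0.0.2 and pair 1 owns 10.0.0.3/10.0.0.4.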
00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:10.794 ' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@69 -- # starttarget 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:10.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=61381 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 61381 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61381 ']' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.794 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:10.794 [2024-11-20 07:16:34.885554] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:10.794 [2024-11-20 07:16:34.885605] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.053 [2024-11-20 07:16:35.024909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.053 [2024-11-20 07:16:35.064964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.053 [2024-11-20 07:16:35.065164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.053 [2024-11-20 07:16:35.065284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.053 [2024-11-20 07:16:35.065313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.053 [2024-11-20 07:16:35.065328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
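Editor's note: by this point setup.sh has exported the legacy variables the rest of the suite consumes: NVMF_TARGET_INTERFACE=target0, NVMF_TARGET_INTERFACE2=target1, NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4 and NVMF_TRANSPORT_OPTS='-t tcp -o'; after modprobe nvme-tcp, host_management.sh's starttarget launches the target inside the namespace. A sketch of the launch-and-wait pattern recorded above, using only the binary path, core mask, pid and socket seen in this log (the real waitforlisten in autotest_common.sh polls the RPC socket with retries; the loop below is a simplification):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!   # 61381 in this run
    # -m 0x1E pins reactors to cores 1-4, matching the four
    # "Reactor started" notices that follow.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done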
00:19:11.053 [2024-11-20 07:16:35.066072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.053 [2024-11-20 07:16:35.066162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.053 [2024-11-20 07:16:35.066212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.053 [2024-11-20 07:16:35.066213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:11.053 [2024-11-20 07:16:35.098415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:11.619 [2024-11-20 07:16:35.756829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.619 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:11.619 Malloc0 00:19:11.878 [2024-11-20 07:16:35.826597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@10 -- # set +x 00:19:11.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=61435 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 61435 /var/tmp/bdevperf.sock 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61435 ']' 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.878 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:19:11.879 { 00:19:11.879 "params": { 00:19:11.879 "name": "Nvme$subsystem", 00:19:11.879 "trtype": "$TEST_TRANSPORT", 00:19:11.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.879 "adrfam": "ipv4", 00:19:11.879 "trsvcid": "$NVMF_PORT", 00:19:11.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.879 "hdgst": ${hdgst:-false}, 00:19:11.879 "ddgst": ${ddgst:-false} 00:19:11.879 }, 00:19:11.879 "method": "bdev_nvme_attach_controller" 00:19:11.879 } 00:19:11.879 EOF 00:19:11.879 )") 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
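Editor's note: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem from the heredoc template above, substituting $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT; the fully resolved JSON (tcp / 10.0.0.2 / 4420) is printed in the records that follow. bdevperf receives it through process substitution, which is why the command line above shows --json /dev/fd/63. A standalone equivalent with the config written to a regular file (the file name is illustrative, the function requires nvmf/common.sh to be sourced, and the bdevperf flags are exactly those logged):

    gen_nvmf_target_json 0 > /tmp/nvme0.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10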
00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:19:11.879 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:19:11.879 "params": { 00:19:11.879 "name": "Nvme0", 00:19:11.879 "trtype": "tcp", 00:19:11.879 "traddr": "10.0.0.2", 00:19:11.879 "adrfam": "ipv4", 00:19:11.879 "trsvcid": "4420", 00:19:11.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:11.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:11.879 "hdgst": false, 00:19:11.879 "ddgst": false 00:19:11.879 }, 00:19:11.879 "method": "bdev_nvme_attach_controller" 00:19:11.879 }' 00:19:11.879 [2024-11-20 07:16:35.896414] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:11.879 [2024-11-20 07:16:35.896480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:19:11.879 [2024-11-20 07:16:36.032198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.879 [2024-11-20 07:16:36.066167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.137 [2024-11-20 07:16:36.104046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.137 Running I/O for 10 seconds... 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:12.703 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1411 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1411 -ge 100 ']' 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.704 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:12.963 [2024-11-20 07:16:36.903681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.963 [2024-11-20 07:16:36.903721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.903730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.963 [2024-11-20 07:16:36.903736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.903743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.963 [2024-11-20 07:16:36.903749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.903756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.963 [2024-11-20 07:16:36.903761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.903767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1420ce0 is same with the state(6) to be set 00:19:12.963 [2024-11-20 07:16:36.905975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.905999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 
07:16:36.906017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.963 [2024-11-20 07:16:36.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.963 [2024-11-20 07:16:36.906351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.964 [2024-11-20 07:16:36.906424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.964 [2024-11-20 07:16:36.906429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:12.964 [2024-11-20 07:16:36.906437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:12.964 [2024-11-20 07:16:36.906442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:12.964 [... the same READ / ABORTED - SQ DELETION pair repeats for cid 32-62, lba 61440-65280, one pair per outstanding I/O ...]
00:19:12.965 [2024-11-20 07:16:36.906873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b2d0 is same with the state(6) to be set
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.965 [2024-11-20 07:16:36.908057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:19:12.965 task offset: 65408 on job bdev=Nvme0n1 fails
00:19:12.965
00:19:12.965 Latency(us)
00:19:12.965 [2024-11-20T07:16:37.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.965 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:12.965 Job: Nvme0n1 ended in about 0.70 seconds with error
00:19:12.965 Verification LBA range: start 0x0 length 0x400
00:19:12.965 Nvme0n1 : 0.70 2112.94 132.06 91.87 0.00 28512.70 5268.09 27222.65
00:19:12.965 [2024-11-20T07:16:37.168Z] ===================================================================================================================
00:19:12.965 [2024-11-20T07:16:37.168Z] Total : 2112.94 132.06 91.87 0.00 28512.70 5268.09 27222.65
00:19:12.965 [2024-11-20 07:16:36.910035] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:12.965 [2024-11-20 07:16:36.910054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1420ce0 (9): Bad file descriptor
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.965 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:19:12.965 [2024-11-20 07:16:36.918460] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
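For readers tracing the failover above: rpc_cmd is SPDK's thin wrapper around scripts/rpc.py, so the nvmf_subsystem_add_host call in this trace corresponds to the direct invocation sketched below. The -s socket argument is an assumption (rpc.py defaults to /var/tmp/spdk.sock); the command name and NQNs are exactly as traced.

# re-authorize host0 on cnode0 while the initiator-side controller resets
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0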
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 61435
00:19:13.903 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (61435) - No such process
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=()
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:19:13.903 {
00:19:13.903 "params": {
00:19:13.903 "name": "Nvme$subsystem",
00:19:13.903 "trtype": "$TEST_TRANSPORT",
00:19:13.903 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:13.903 "adrfam": "ipv4",
00:19:13.903 "trsvcid": "$NVMF_PORT",
00:19:13.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:13.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:13.903 "hdgst": ${hdgst:-false},
00:19:13.903 "ddgst": ${ddgst:-false}
00:19:13.903 },
00:19:13.903 "method": "bdev_nvme_attach_controller"
00:19:13.903 }
00:19:13.903 EOF
00:19:13.903 )")
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq .
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=,
00:19:13.903 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:19:13.903 "params": {
00:19:13.903 "name": "Nvme0",
00:19:13.903 "trtype": "tcp",
00:19:13.903 "traddr": "10.0.0.2",
00:19:13.903 "adrfam": "ipv4",
00:19:13.903 "trsvcid": "4420",
00:19:13.903 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:19:13.903 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:19:13.903 "hdgst": false,
00:19:13.903 "ddgst": false
00:19:13.903 },
00:19:13.903 "method": "bdev_nvme_attach_controller"
00:19:13.903 }'
00:19:13.903 [2024-11-20 07:16:37.954993] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
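The gen_nvmf_target_json trace above is easier to read stripped of the xtrace noise. A minimal sketch of the same pattern follows, assuming bash with jq installed; the values the real helper (nvmf/common.sh) takes from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT are hard-coded here to the ones printed in this run.

# Build one attach-controller stanza per subsystem id; the comma join via
# IFS degenerates to a single JSON object in the one-subsystem case used here.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .    # normalize / validate
}
# bdevperf receives the config as an anonymous file on fd 62:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 \
    -q 64 -o 65536 -w verify -t 1 62< <(gen_nvmf_target_json 0)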
00:19:13.904 [2024-11-20 07:16:37.955058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61473 ]
00:19:14.163 [2024-11-20 07:16:38.096626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:14.163 [2024-11-20 07:16:38.136092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:14.163 [2024-11-20 07:16:38.177064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:14.163 Running I/O for 1 seconds...
00:19:15.539 1920.00 IOPS, 120.00 MiB/s
00:19:15.539
00:19:15.539 Latency(us)
00:19:15.539 [2024-11-20T07:16:39.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:15.539 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:15.539 Verification LBA range: start 0x0 length 0x400
00:19:15.539 Nvme0n1 : 1.03 1929.70 120.61 0.00 0.00 32584.65 3201.18 30045.74
00:19:15.539 [2024-11-20T07:16:39.742Z] ===================================================================================================================
00:19:15.539 [2024-11-20T07:16:39.742Z] Total : 1929.70 120.61 0.00 0.00 32584.65 3201.18 30045.74
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20}
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:19:15.539 rmmod nvme_tcp
00:19:15.539 rmmod nvme_fabrics
00:19:15.539 rmmod nvme_keyring
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 61381 ']'
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 61381
00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 61381 ']'
00:19:15.539 07:16:39
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 61381 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61381 00:19:15.539 killing process with pid 61381 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61381' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 61381 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 61381 00:19:15.539 [2024-11-20 07:16:39.664214] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:15.539 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local 
dev=initiator0 in_ns= 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:15.797 ************************************ 00:19:15.797 END TEST nvmf_host_management 00:19:15.797 ************************************ 00:19:15.797 00:19:15.797 real 0m5.517s 00:19:15.797 user 0m21.053s 00:19:15.797 sys 0m1.183s 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:15.797 ************************************ 00:19:15.797 START TEST nvmf_lvol 00:19:15.797 ************************************ 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:15.797 * Looking for test storage... 00:19:15.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:19:15.797 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.057 --rc genhtml_branch_coverage=1 00:19:16.057 --rc genhtml_function_coverage=1 00:19:16.057 --rc genhtml_legend=1 00:19:16.057 --rc geninfo_all_blocks=1 00:19:16.057 --rc geninfo_unexecuted_blocks=1 00:19:16.057 00:19:16.057 ' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.057 --rc genhtml_branch_coverage=1 00:19:16.057 --rc genhtml_function_coverage=1 00:19:16.057 --rc genhtml_legend=1 00:19:16.057 --rc geninfo_all_blocks=1 00:19:16.057 --rc geninfo_unexecuted_blocks=1 00:19:16.057 00:19:16.057 ' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.057 --rc genhtml_branch_coverage=1 00:19:16.057 --rc genhtml_function_coverage=1 00:19:16.057 --rc genhtml_legend=1 00:19:16.057 --rc geninfo_all_blocks=1 00:19:16.057 --rc geninfo_unexecuted_blocks=1 00:19:16.057 00:19:16.057 ' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.057 --rc genhtml_branch_coverage=1 00:19:16.057 --rc genhtml_function_coverage=1 00:19:16.057 --rc genhtml_legend=1 00:19:16.057 --rc geninfo_all_blocks=1 00:19:16.057 --rc geninfo_unexecuted_blocks=1 00:19:16.057 00:19:16.057 ' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.057 07:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:16.057 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:16.058 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:19:16.058 07:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator0 up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:16.058 07:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:16.058 10.0.0.1 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:16.058 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:16.059 10.0.0.2 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:16.059 07:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local 
dev=target1 ns=nvmf_ns_spdk 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:16.059 10.0.0.3 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:16.059 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:16.060 10.0.0.4 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.060 07:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 
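The trace above has now assembled the second initiator/target pair. Collapsed into a standalone sequence, the topology those commands build looks like the sketch below; the device, bridge, and namespace names are the ones in this log, but the condensed script is illustrative rather than the setup.sh source.

    # One pair: two veth links, target end moved into the netns, bridge-side
    # ends enslaved to nvmf_br, and the NVMe/TCP port opened on the initiator.
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk
    ip addr add 10.0.0.3/24 dev initiator1
    echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias   # IP recorded for later lookup
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
    ip link set initiator1 up
    ip netns exec nvmf_ns_spdk ip link set target1 up
    ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
    ip link set target1_br master nvmf_br && ip link set target1_br up
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT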
00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:16.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:19:16.060 00:19:16.060 --- 10.0.0.1 ping statistics --- 00:19:16.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.060 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:16.060 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:16.319 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:16.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:19:16.319 00:19:16.319 --- 10.0.0.2 ping statistics --- 00:19:16.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.320 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:16.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:16.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:19:16.320 00:19:16.320 --- 10.0.0.3 ping statistics --- 00:19:16.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.320 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:16.320 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:16.320 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:19:16.320 00:19:16.320 --- 10.0.0.4 ping statistics --- 00:19:16.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.320 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:16.320 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
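The address lookups in this stretch never parse ip output: setup recorded every address in the interface alias, so recovering one is a read of /sys/class/net/<dev>/ifalias, wrapped in ip netns exec when the device lives in the target namespace. A minimal sketch of that logic (the helper name get_ip is hypothetical; the traced functions are get_ip_address / get_initiator_ip_address / get_target_ip_address):

    # Read back the IP that setup stored in the device's ifalias file.
    get_ip() {
        local dev=$1 ns=$2
        if [[ -n $ns ]]; then
            ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }
    get_ip initiator0             # -> 10.0.0.1
    get_ip target1 nvmf_ns_spdk   # -> 10.0.0.4
    # the ping_ips pass earlier is the same lookup plus a reachability check:
    ip netns exec nvmf_ns_spdk ping -c 1 "$(get_ip initiator0)"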
00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:16.321 ' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=61732 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 61732 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 61732 ']' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.321 07:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.321 [2024-11-20 07:16:40.403266] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:16.321 [2024-11-20 07:16:40.403325] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.730 [2024-11-20 07:16:40.545158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.730 [2024-11-20 07:16:40.580501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.730 [2024-11-20 07:16:40.580541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.730 [2024-11-20 07:16:40.580547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.730 [2024-11-20 07:16:40.580552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.730 [2024-11-20 07:16:40.580558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.730 [2024-11-20 07:16:40.581329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.730 [2024-11-20 07:16:40.581243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.730 [2024-11-20 07:16:40.581315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.730 [2024-11-20 07:16:40.612583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.305 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:17.305 [2024-11-20 07:16:41.489008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.562 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.562 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:17.562 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.820 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:17.820 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:18.077 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:18.334 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0775a74f-cff2-49f2-99a5-c8e2397a9edb 00:19:18.334 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0775a74f-cff2-49f2-99a5-c8e2397a9edb lvol 20 00:19:18.591 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=79203721-b169-4a14-8a75-db4ccf77a230 00:19:18.591 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:18.847 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 79203721-b169-4a14-8a75-db4ccf77a230 00:19:18.847 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.104 [2024-11-20 07:16:43.215615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.104 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:19.363 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=61802 00:19:19.363 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:19.363 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:20.296 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 79203721-b169-4a14-8a75-db4ccf77a230 MY_SNAPSHOT 00:19:20.552 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=021308cb-69d8-46f8-847d-11405282cf06 00:19:20.552 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 79203721-b169-4a14-8a75-db4ccf77a230 30 00:19:20.808 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 021308cb-69d8-46f8-847d-11405282cf06 MY_CLONE 00:19:21.064 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8ef250cf-eba8-483b-8008-d7df2f061dca 00:19:21.065 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8ef250cf-eba8-483b-8008-d7df2f061dca 00:19:21.321 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 61802 
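Everything the lvol test did between target start and the perf results that follow was driven over /var/tmp/spdk.sock (the target itself runs inside the namespace as ip netns exec nvmf_ns_spdk nvmf_tgt -m 0x7). A condensed sketch of the traced rpc.py sequence, where $rpc stands for scripts/rpc.py and the angle-bracketed UUIDs are the values the create calls returned in this run:

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                            # Malloc0
    $rpc bdev_malloc_create 64 512                            # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs                   # 0775a74f-...
    $rpc bdev_lvol_create -u <lvstore-uuid> lvol 20           # 79203721-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # with spdk_nvme_perf writing in the background, mutate the live volume:
    $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT           # 021308cb-...
    $rpc bdev_lvol_resize <lvol-uuid> 30
    $rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE             # 8ef250cf-...
    $rpc bdev_lvol_inflate <clone-uuid>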
00:19:31.292 Initializing NVMe Controllers
00:19:31.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:19:31.292 Controller IO queue size 128, less than required.
00:19:31.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:31.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:19:31.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:19:31.292 Initialization complete. Launching workers.
00:19:31.292 ========================================================
00:19:31.292                                                                            Latency(us)
00:19:31.292 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:19:31.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15432.20      60.28    8298.28    1482.18   32032.26
00:19:31.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   16047.30      62.68    7980.83     344.07   32309.83
00:19:31.292 ========================================================
00:19:31.292 Total                                                                  :   31479.49     122.97    8136.46     344.07   32309.83
00:19:31.292
00:19:31.292 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:19:31.292 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 79203721-b169-4a14-8a75-db4ccf77a230
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0775a74f-cff2-49f2-99a5-c8e2397a9edb
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync
00:19:31.292 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20}
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:19:31.293 rmmod nvme_tcp
00:19:31.293 rmmod nvme_fabrics
00:19:31.293 rmmod nvme_keyring
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 61732 ']'
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 61732
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 61732 ']'
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 61732
00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:19:31.293 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61732 00:19:31.293 killing process with pid 61732 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61732' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 61732 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 61732 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:31.293 00:19:31.293 real 0m14.847s 00:19:31.293 user 1m2.540s 00:19:31.293 sys 0m3.576s 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:31.293 ************************************ 00:19:31.293 END TEST nvmf_lvol 00:19:31.293 ************************************ 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:31.293 ************************************ 00:19:31.293 START TEST nvmf_lvs_grow 00:19:31.293 ************************************ 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:31.293 * Looking for test storage... 
00:19:31.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.293 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.293 --rc genhtml_branch_coverage=1 00:19:31.293 --rc genhtml_function_coverage=1 00:19:31.294 --rc genhtml_legend=1 00:19:31.294 --rc geninfo_all_blocks=1 00:19:31.294 --rc geninfo_unexecuted_blocks=1 00:19:31.294 00:19:31.294 ' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.294 --rc genhtml_branch_coverage=1 00:19:31.294 --rc genhtml_function_coverage=1 00:19:31.294 --rc genhtml_legend=1 00:19:31.294 --rc geninfo_all_blocks=1 00:19:31.294 --rc geninfo_unexecuted_blocks=1 00:19:31.294 00:19:31.294 ' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.294 --rc genhtml_branch_coverage=1 00:19:31.294 --rc genhtml_function_coverage=1 00:19:31.294 --rc genhtml_legend=1 00:19:31.294 --rc geninfo_all_blocks=1 00:19:31.294 --rc geninfo_unexecuted_blocks=1 00:19:31.294 00:19:31.294 ' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.294 --rc genhtml_branch_coverage=1 00:19:31.294 --rc genhtml_function_coverage=1 00:19:31.294 --rc genhtml_legend=1 00:19:31.294 --rc geninfo_all_blocks=1 00:19:31.294 --rc geninfo_unexecuted_blocks=1 00:19:31.294 00:19:31.294 ' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:31.294 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:31.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:31.294 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:31.294 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:19:31.294 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 
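The trace above shows setup.sh laying the groundwork: creating the target network namespace, bringing up its loopback, adding the nvmf_br bridge, and installing a hairpin-forwarding iptables rule. Condensed into a minimal stand-alone sketch (device and namespace names copied from the trace; run as root on a disposable host):

    ip netns add nvmf_ns_spdk                      # namespace the target will live in
    ip netns exec nvmf_ns_spdk ip link set lo up   # loopback up inside the namespace
    ip link add nvmf_br type bridge                # main bridge on the host side
    ip link set nvmf_br up
    # Allow traffic between ports of the same bridge; the comment tags the
    # rule so the harness can find and delete it again during cleanup.
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
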
00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 
167772161 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:31.295 10.0.0.1 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.295 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:31.295 10.0.0.2 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:31.295 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:31.296 07:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local 
dev=target1 in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:31.296 10.0.0.3 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:19:31.296 07:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:31.296 10.0.0.4 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 
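Both interface pairs follow the same pattern visible in the trace: one veth pair per endpoint, the target end moved into the namespace, the address mirrored into ifalias so the harness can read it back, the *_br peers enslaved to nvmf_br, and an INPUT accept for the NVMe/TCP port. A sketch of one pair, with commands as they appear above (pair 0 shown; pair 1 repeats it with initiator1/target1 and 10.0.0.3/10.0.0.4):

    ip link add initiator0 type veth peer name initiator0_br   # initiator end + bridge peer
    ip link set initiator0 up
    ip link set initiator0_br up
    ip link add target0 type veth peer name target0_br         # target end + bridge peer
    ip link set target0 up
    ip link set target0_br up
    ip link set target0 netns nvmf_ns_spdk        # moving it also downs it...
    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias      # harness reads IPs back from ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip netns exec nvmf_ns_spdk ip link set target0 up          # ...so bring it back up in the namespace
    ip link set initiator0_br master nvmf_br      # bridge the two ends together
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
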
00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:31.296 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:31.297 07:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:31.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:19:31.297 00:19:31.297 --- 10.0.0.1 ping statistics --- 00:19:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.297 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:31.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.297 00:19:31.297 --- 10.0.0.2 ping statistics --- 00:19:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.297 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:31.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:31.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:31.297 00:19:31.297 --- 10.0.0.3 ping statistics --- 00:19:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.297 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:31.297 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:31.298 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:31.298 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:19:31.298 00:19:31.298 --- 10.0.0.4 ping statistics --- 00:19:31.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.298 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:31.298 07:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:31.298 ' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.298 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=62170 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 62170 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 62170 ']' 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.299 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:31.299 [2024-11-20 07:16:55.291751] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:31.299 [2024-11-20 07:16:55.291812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.299 [2024-11-20 07:16:55.425554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.299 [2024-11-20 07:16:55.461530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.299 [2024-11-20 07:16:55.461584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.299 [2024-11-20 07:16:55.461591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.299 [2024-11-20 07:16:55.461596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.299 [2024-11-20 07:16:55.461600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
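After the wiring, ping_ips probes each address once from the appropriate side, the legacy NVMF_*_IP variables are derived from the ifalias files, and nvmf_tgt is launched inside the namespace (the full command is split across the lines above). A condensed sketch; the until-loop is only a rough stand-in for the harness's waitforlisten helper, and polling rpc_get_methods is an assumption introduced here, not what common.sh actually does:

    # Connectivity check, one probe per side per pair:
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator side
    ping -c 1 10.0.0.2                              # host -> target side
    # Launch the target inside the namespace with the flags from the trace:
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # Poll the default RPC socket until the app answers (stand-in for waitforlisten):
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
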
00:19:31.299 [2024-11-20 07:16:55.461907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.558 [2024-11-20 07:16:55.492911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.125 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.383 [2024-11-20 07:16:56.397078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:32.383 ************************************ 00:19:32.383 START TEST lvs_grow_clean 00:19:32.383 ************************************ 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:32.383 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:32.650 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:19:32.650 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:32.908 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:32.908 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:32.908 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:32.908 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:32.908 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:32.908 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e lvol 150 00:19:33.165 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3ba2fd56-0347-4173-879c-dec641a9ef71 00:19:33.165 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:33.165 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:33.423 [2024-11-20 07:16:57.484476] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:33.423 [2024-11-20 07:16:57.484540] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:33.423 true 00:19:33.423 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:33.423 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:33.680 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:33.680 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:33.680 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ba2fd56-0347-4173-879c-dec641a9ef71 00:19:33.937 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.196 [2024-11-20 07:16:58.248892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.196 07:16:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62255 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62255 /var/tmp/bdevperf.sock 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 62255 ']' 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.454 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:34.454 [2024-11-20 07:16:58.473232] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
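Note: the bdevperf launch just above runs in deferred-start mode: -z keeps the app idle so a controller can be attached over its private RPC socket (-r) before any I/O is issued. A minimal sketch of the three-step pattern this log follows, with every flag and address copied from the log itself; only the $spdk shorthand is ours:

    spdk=/home/vagrant/spdk_repo/spdk
    # 1. Launch idle; -z defers the run until perform_tests arrives on the socket.
    $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # 2. Attach the NVMe/TCP subsystem exported above; it appears as bdev Nvme0n1.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # 3. Kick off the queued 10 s randwrite job, reporting stats every second (-S 1).
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests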
00:19:34.454 [2024-11-20 07:16:58.473301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62255 ] 00:19:34.454 [2024-11-20 07:16:58.610959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.454 [2024-11-20 07:16:58.648293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.713 [2024-11-20 07:16:58.679699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.277 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.277 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:19:35.277 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:35.534 Nvme0n1 00:19:35.534 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:35.791 [ 00:19:35.791 { 00:19:35.791 "name": "Nvme0n1", 00:19:35.791 "aliases": [ 00:19:35.791 "3ba2fd56-0347-4173-879c-dec641a9ef71" 00:19:35.791 ], 00:19:35.791 "product_name": "NVMe disk", 00:19:35.791 "block_size": 4096, 00:19:35.791 "num_blocks": 38912, 00:19:35.791 "uuid": "3ba2fd56-0347-4173-879c-dec641a9ef71", 00:19:35.791 "numa_id": -1, 00:19:35.791 "assigned_rate_limits": { 00:19:35.791 "rw_ios_per_sec": 0, 00:19:35.791 "rw_mbytes_per_sec": 0, 00:19:35.791 "r_mbytes_per_sec": 0, 00:19:35.791 "w_mbytes_per_sec": 0 00:19:35.791 }, 00:19:35.791 "claimed": false, 00:19:35.791 "zoned": false, 00:19:35.791 "supported_io_types": { 00:19:35.791 "read": true, 00:19:35.791 "write": true, 00:19:35.791 "unmap": true, 00:19:35.791 "flush": true, 00:19:35.791 "reset": true, 00:19:35.791 "nvme_admin": true, 00:19:35.791 "nvme_io": true, 00:19:35.791 "nvme_io_md": false, 00:19:35.791 "write_zeroes": true, 00:19:35.791 "zcopy": false, 00:19:35.791 "get_zone_info": false, 00:19:35.791 "zone_management": false, 00:19:35.791 "zone_append": false, 00:19:35.791 "compare": true, 00:19:35.791 "compare_and_write": true, 00:19:35.791 "abort": true, 00:19:35.791 "seek_hole": false, 00:19:35.791 "seek_data": false, 00:19:35.791 "copy": true, 00:19:35.791 "nvme_iov_md": false 00:19:35.791 }, 00:19:35.791 "memory_domains": [ 00:19:35.791 { 00:19:35.791 "dma_device_id": "system", 00:19:35.791 "dma_device_type": 1 00:19:35.791 } 00:19:35.791 ], 00:19:35.791 "driver_specific": { 00:19:35.791 "nvme": [ 00:19:35.791 { 00:19:35.791 "trid": { 00:19:35.791 "trtype": "TCP", 00:19:35.791 "adrfam": "IPv4", 00:19:35.791 "traddr": "10.0.0.2", 00:19:35.791 "trsvcid": "4420", 00:19:35.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:35.791 }, 00:19:35.791 "ctrlr_data": { 00:19:35.791 "cntlid": 1, 00:19:35.791 "vendor_id": "0x8086", 00:19:35.791 "model_number": "SPDK bdev Controller", 00:19:35.792 "serial_number": "SPDK0", 00:19:35.792 "firmware_revision": "25.01", 00:19:35.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:35.792 "oacs": { 00:19:35.792 "security": 0, 00:19:35.792 "format": 0, 00:19:35.792 "firmware": 0, 
00:19:35.792 "ns_manage": 0 00:19:35.792 }, 00:19:35.792 "multi_ctrlr": true, 00:19:35.792 "ana_reporting": false 00:19:35.792 }, 00:19:35.792 "vs": { 00:19:35.792 "nvme_version": "1.3" 00:19:35.792 }, 00:19:35.792 "ns_data": { 00:19:35.792 "id": 1, 00:19:35.792 "can_share": true 00:19:35.792 } 00:19:35.792 } 00:19:35.792 ], 00:19:35.792 "mp_policy": "active_passive" 00:19:35.792 } 00:19:35.792 } 00:19:35.792 ] 00:19:35.792 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62278 00:19:35.792 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:35.792 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.792 Running I/O for 10 seconds... 00:19:36.725 Latency(us) 00:19:36.725 [2024-11-20T07:17:00.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:36.725 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:19:36.725 [2024-11-20T07:17:00.928Z] =================================================================================================================== 00:19:36.725 [2024-11-20T07:17:00.928Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:19:36.725 00:19:37.658 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:37.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:37.658 Nvme0n1 : 2.00 10613.00 41.46 0.00 0.00 0.00 0.00 0.00 00:19:37.658 [2024-11-20T07:17:01.861Z] =================================================================================================================== 00:19:37.658 [2024-11-20T07:17:01.861Z] Total : 10613.00 41.46 0.00 0.00 0.00 0.00 0.00 00:19:37.658 00:19:37.916 true 00:19:37.916 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:37.916 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:38.174 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:38.174 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:38.174 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62278 00:19:38.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.739 Nvme0n1 : 3.00 10629.00 41.52 0.00 0.00 0.00 0.00 0.00 00:19:38.739 [2024-11-20T07:17:02.942Z] =================================================================================================================== 00:19:38.739 [2024-11-20T07:17:02.942Z] Total : 10629.00 41.52 0.00 0.00 0.00 0.00 0.00 00:19:38.739 00:19:39.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.671 Nvme0n1 : 4.00 10765.75 42.05 0.00 0.00 0.00 0.00 0.00 00:19:39.671 [2024-11-20T07:17:03.874Z] 
=================================================================================================================== 00:19:39.671 [2024-11-20T07:17:03.874Z] Total : 10765.75 42.05 0.00 0.00 0.00 0.00 0.00 00:19:39.671 00:19:41.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.043 Nvme0n1 : 5.00 10797.00 42.18 0.00 0.00 0.00 0.00 0.00 00:19:41.043 [2024-11-20T07:17:05.246Z] =================================================================================================================== 00:19:41.043 [2024-11-20T07:17:05.246Z] Total : 10797.00 42.18 0.00 0.00 0.00 0.00 0.00 00:19:41.043 00:19:41.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.975 Nvme0n1 : 6.00 10817.17 42.25 0.00 0.00 0.00 0.00 0.00 00:19:41.975 [2024-11-20T07:17:06.178Z] =================================================================================================================== 00:19:41.975 [2024-11-20T07:17:06.178Z] Total : 10817.17 42.25 0.00 0.00 0.00 0.00 0.00 00:19:41.975 00:19:42.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.907 Nvme0n1 : 7.00 10845.43 42.36 0.00 0.00 0.00 0.00 0.00 00:19:42.907 [2024-11-20T07:17:07.110Z] =================================================================================================================== 00:19:42.907 [2024-11-20T07:17:07.110Z] Total : 10845.43 42.36 0.00 0.00 0.00 0.00 0.00 00:19:42.907 00:19:43.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:43.840 Nvme0n1 : 8.00 10820.62 42.27 0.00 0.00 0.00 0.00 0.00 00:19:43.840 [2024-11-20T07:17:08.043Z] =================================================================================================================== 00:19:43.840 [2024-11-20T07:17:08.043Z] Total : 10820.62 42.27 0.00 0.00 0.00 0.00 0.00 00:19:43.840 00:19:44.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.772 Nvme0n1 : 9.00 10829.56 42.30 0.00 0.00 0.00 0.00 0.00 00:19:44.772 [2024-11-20T07:17:08.975Z] =================================================================================================================== 00:19:44.772 [2024-11-20T07:17:08.975Z] Total : 10829.56 42.30 0.00 0.00 0.00 0.00 0.00 00:19:44.772 00:19:45.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.702 Nvme0n1 : 10.00 10816.80 42.25 0.00 0.00 0.00 0.00 0.00 00:19:45.702 [2024-11-20T07:17:09.905Z] =================================================================================================================== 00:19:45.702 [2024-11-20T07:17:09.905Z] Total : 10816.80 42.25 0.00 0.00 0.00 0.00 0.00 00:19:45.702 00:19:45.702 00:19:45.702 Latency(us) 00:19:45.702 [2024-11-20T07:17:09.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.702 Nvme0n1 : 10.01 10817.89 42.26 0.00 0.00 11828.90 4058.19 76626.71 00:19:45.702 [2024-11-20T07:17:09.905Z] =================================================================================================================== 00:19:45.702 [2024-11-20T07:17:09.905Z] Total : 10817.89 42.26 0.00 0.00 11828.90 4058.19 76626.71 00:19:45.702 { 00:19:45.702 "results": [ 00:19:45.702 { 00:19:45.702 "job": "Nvme0n1", 00:19:45.702 "core_mask": "0x2", 00:19:45.702 "workload": "randwrite", 00:19:45.702 "status": "finished", 00:19:45.702 "queue_depth": 128, 00:19:45.702 "io_size": 4096, 00:19:45.702 
"runtime": 10.010827, 00:19:45.702 "iops": 10817.887473232731, 00:19:45.702 "mibps": 42.25737294231536, 00:19:45.702 "io_failed": 0, 00:19:45.702 "io_timeout": 0, 00:19:45.702 "avg_latency_us": 11828.896008546377, 00:19:45.702 "min_latency_us": 4058.190769230769, 00:19:45.702 "max_latency_us": 76626.7076923077 00:19:45.702 } 00:19:45.702 ], 00:19:45.702 "core_count": 1 00:19:45.702 } 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62255 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 62255 ']' 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 62255 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62255 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.702 killing process with pid 62255 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62255' 00:19:45.702 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.702 00:19:45.702 Latency(us) 00:19:45.702 [2024-11-20T07:17:09.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.702 [2024-11-20T07:17:09.905Z] =================================================================================================================== 00:19:45.702 [2024-11-20T07:17:09.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 62255 00:19:45.702 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 62255 00:19:45.960 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:46.216 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:46.216 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:46.216 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:19:46.473 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:19:46.473 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:19:46.473 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:46.732 [2024-11-20 07:17:10.685005] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:46.732 request: 00:19:46.732 { 00:19:46.732 "uuid": "bfb42ff8-a9e9-4cde-aad3-ff5528748d0e", 00:19:46.732 "method": "bdev_lvol_get_lvstores", 00:19:46.732 "req_id": 1 00:19:46.732 } 00:19:46.732 Got JSON-RPC error response 00:19:46.732 response: 00:19:46.732 { 00:19:46.732 "code": -19, 00:19:46.732 "message": "No such device" 00:19:46.732 } 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.732 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:46.990 aio_bdev 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3ba2fd56-0347-4173-879c-dec641a9ef71 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3ba2fd56-0347-4173-879c-dec641a9ef71 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.990 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:47.248 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ba2fd56-0347-4173-879c-dec641a9ef71 -t 2000 00:19:47.506 [ 00:19:47.506 { 00:19:47.506 "name": "3ba2fd56-0347-4173-879c-dec641a9ef71", 00:19:47.506 "aliases": [ 00:19:47.506 "lvs/lvol" 00:19:47.506 ], 00:19:47.506 "product_name": "Logical Volume", 00:19:47.506 "block_size": 4096, 00:19:47.506 "num_blocks": 38912, 00:19:47.506 "uuid": "3ba2fd56-0347-4173-879c-dec641a9ef71", 00:19:47.506 "assigned_rate_limits": { 00:19:47.506 "rw_ios_per_sec": 0, 00:19:47.506 "rw_mbytes_per_sec": 0, 00:19:47.506 "r_mbytes_per_sec": 0, 00:19:47.506 "w_mbytes_per_sec": 0 00:19:47.506 }, 00:19:47.506 "claimed": false, 00:19:47.506 "zoned": false, 00:19:47.506 "supported_io_types": { 00:19:47.506 "read": true, 00:19:47.506 "write": true, 00:19:47.506 "unmap": true, 00:19:47.506 "flush": false, 00:19:47.506 "reset": true, 00:19:47.506 "nvme_admin": false, 00:19:47.506 "nvme_io": false, 00:19:47.506 "nvme_io_md": false, 00:19:47.506 "write_zeroes": true, 00:19:47.506 "zcopy": false, 00:19:47.506 "get_zone_info": false, 00:19:47.506 "zone_management": false, 00:19:47.506 "zone_append": false, 00:19:47.506 "compare": false, 00:19:47.506 "compare_and_write": false, 00:19:47.506 "abort": false, 00:19:47.506 "seek_hole": true, 00:19:47.506 "seek_data": true, 00:19:47.506 "copy": false, 00:19:47.507 "nvme_iov_md": false 00:19:47.507 }, 00:19:47.507 "driver_specific": { 00:19:47.507 "lvol": { 00:19:47.507 "lvol_store_uuid": "bfb42ff8-a9e9-4cde-aad3-ff5528748d0e", 00:19:47.507 "base_bdev": "aio_bdev", 00:19:47.507 "thin_provision": false, 00:19:47.507 "num_allocated_clusters": 38, 00:19:47.507 "snapshot": false, 00:19:47.507 "clone": false, 00:19:47.507 "esnap_clone": false 00:19:47.507 } 00:19:47.507 } 00:19:47.507 } 00:19:47.507 ] 00:19:47.507 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:19:47.507 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:47.507 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:47.765 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:47.765 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:19:47.765 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:47.765 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:47.765 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3ba2fd56-0347-4173-879c-dec641a9ef71 00:19:48.022 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfb42ff8-a9e9-4cde-aad3-ff5528748d0e 00:19:48.279 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:48.536 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:48.793 00:19:48.793 real 0m16.466s 00:19:48.793 user 0m15.499s 00:19:48.793 sys 0m1.927s 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.793 ************************************ 00:19:48.793 END TEST lvs_grow_clean 00:19:48.793 ************************************ 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:48.793 ************************************ 00:19:48.793 START TEST lvs_grow_dirty 00:19:48.793 ************************************ 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:48.793 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:49.050 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:49.050 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:49.307 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:19:49.307 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:49.307 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:19:49.564 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:49.564 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:49.564 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 lvol 150 00:19:49.821 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6100ae61-45ee-455f-98b1-551f13535b02 00:19:49.821 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:49.821 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:49.821 [2024-11-20 07:17:13.953357] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:49.821 [2024-11-20 07:17:13.953413] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:49.821 true 00:19:49.821 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:19:49.821 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:50.079 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:50.079 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:50.336 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6100ae61-45ee-455f-98b1-551f13535b02 00:19:50.593 07:17:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:50.593 [2024-11-20 07:17:14.749707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.593 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62509 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62509 /var/tmp/bdevperf.sock 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 62509 ']' 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.851 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:50.851 [2024-11-20 07:17:14.991667] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
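Note: before the dirty run's per-second output begins, the cluster counts asserted around bdev_lvol_grow_lvstore (49 above, 99 after the grow further down) follow directly from the sizes in this log: a 200 MiB backing file is 51200 4 KiB blocks, i.e. 50 clusters at the 4 MiB cluster size, and the one-cluster difference from the reported 49 is lvstore metadata overhead; doubling the file to 400 MiB likewise yields 99. The grow sequence itself reduces to the commands below, taken from the log; the $rpc and $lvs shorthands are ours:

    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    truncate -s 400M "$aio"                # backing file: 51200 -> 102400 blocks
    $rpc bdev_aio_rescan aio_bdev          # AIO bdev adopts the new size
    $rpc bdev_lvol_grow_lvstore -u "$lvs"  # lvstore: 49 -> 99 total_data_clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'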
00:19:50.852 [2024-11-20 07:17:14.991731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ] 00:19:51.109 [2024-11-20 07:17:15.128253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.109 [2024-11-20 07:17:15.160031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.109 [2024-11-20 07:17:15.188424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:51.674 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.674 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:19:51.674 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:51.931 Nvme0n1 00:19:52.197 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:52.197 [ 00:19:52.197 { 00:19:52.197 "name": "Nvme0n1", 00:19:52.197 "aliases": [ 00:19:52.197 "6100ae61-45ee-455f-98b1-551f13535b02" 00:19:52.197 ], 00:19:52.197 "product_name": "NVMe disk", 00:19:52.197 "block_size": 4096, 00:19:52.197 "num_blocks": 38912, 00:19:52.197 "uuid": "6100ae61-45ee-455f-98b1-551f13535b02", 00:19:52.197 "numa_id": -1, 00:19:52.197 "assigned_rate_limits": { 00:19:52.197 "rw_ios_per_sec": 0, 00:19:52.197 "rw_mbytes_per_sec": 0, 00:19:52.197 "r_mbytes_per_sec": 0, 00:19:52.197 "w_mbytes_per_sec": 0 00:19:52.197 }, 00:19:52.197 "claimed": false, 00:19:52.197 "zoned": false, 00:19:52.197 "supported_io_types": { 00:19:52.197 "read": true, 00:19:52.197 "write": true, 00:19:52.197 "unmap": true, 00:19:52.197 "flush": true, 00:19:52.197 "reset": true, 00:19:52.197 "nvme_admin": true, 00:19:52.197 "nvme_io": true, 00:19:52.197 "nvme_io_md": false, 00:19:52.197 "write_zeroes": true, 00:19:52.197 "zcopy": false, 00:19:52.197 "get_zone_info": false, 00:19:52.197 "zone_management": false, 00:19:52.197 "zone_append": false, 00:19:52.197 "compare": true, 00:19:52.197 "compare_and_write": true, 00:19:52.197 "abort": true, 00:19:52.197 "seek_hole": false, 00:19:52.197 "seek_data": false, 00:19:52.197 "copy": true, 00:19:52.197 "nvme_iov_md": false 00:19:52.197 }, 00:19:52.197 "memory_domains": [ 00:19:52.197 { 00:19:52.197 "dma_device_id": "system", 00:19:52.198 "dma_device_type": 1 00:19:52.198 } 00:19:52.198 ], 00:19:52.198 "driver_specific": { 00:19:52.198 "nvme": [ 00:19:52.198 { 00:19:52.198 "trid": { 00:19:52.198 "trtype": "TCP", 00:19:52.198 "adrfam": "IPv4", 00:19:52.198 "traddr": "10.0.0.2", 00:19:52.198 "trsvcid": "4420", 00:19:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:52.198 }, 00:19:52.198 "ctrlr_data": { 00:19:52.198 "cntlid": 1, 00:19:52.198 "vendor_id": "0x8086", 00:19:52.198 "model_number": "SPDK bdev Controller", 00:19:52.198 "serial_number": "SPDK0", 00:19:52.198 "firmware_revision": "25.01", 00:19:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.198 "oacs": { 00:19:52.198 "security": 0, 00:19:52.198 "format": 0, 00:19:52.198 "firmware": 0, 
00:19:52.198 "ns_manage": 0 00:19:52.198 }, 00:19:52.198 "multi_ctrlr": true, 00:19:52.198 "ana_reporting": false 00:19:52.198 }, 00:19:52.198 "vs": { 00:19:52.198 "nvme_version": "1.3" 00:19:52.198 }, 00:19:52.198 "ns_data": { 00:19:52.198 "id": 1, 00:19:52.198 "can_share": true 00:19:52.198 } 00:19:52.198 } 00:19:52.198 ], 00:19:52.198 "mp_policy": "active_passive" 00:19:52.198 } 00:19:52.198 } 00:19:52.198 ] 00:19:52.198 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62527 00:19:52.198 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.198 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:52.489 Running I/O for 10 seconds... 00:19:53.420 Latency(us) 00:19:53.420 [2024-11-20T07:17:17.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:53.420 Nvme0n1 : 1.00 11298.00 44.13 0.00 0.00 0.00 0.00 0.00 00:19:53.420 [2024-11-20T07:17:17.623Z] =================================================================================================================== 00:19:53.420 [2024-11-20T07:17:17.623Z] Total : 11298.00 44.13 0.00 0.00 0.00 0.00 0.00 00:19:53.420 00:19:54.354 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:19:54.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.354 Nvme0n1 : 2.00 11427.50 44.64 0.00 0.00 0.00 0.00 0.00 00:19:54.354 [2024-11-20T07:17:18.557Z] =================================================================================================================== 00:19:54.354 [2024-11-20T07:17:18.557Z] Total : 11427.50 44.64 0.00 0.00 0.00 0.00 0.00 00:19:54.354 00:19:54.354 true 00:19:54.611 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:19:54.611 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:54.611 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:54.611 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:54.611 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 62527 00:19:55.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.560 Nvme0n1 : 3.00 11301.33 44.15 0.00 0.00 0.00 0.00 0.00 00:19:55.560 [2024-11-20T07:17:19.763Z] =================================================================================================================== 00:19:55.560 [2024-11-20T07:17:19.763Z] Total : 11301.33 44.15 0.00 0.00 0.00 0.00 0.00 00:19:55.560 00:19:56.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:56.494 Nvme0n1 : 4.00 11270.00 44.02 0.00 0.00 0.00 0.00 0.00 00:19:56.494 [2024-11-20T07:17:20.697Z] 
=================================================================================================================== 00:19:56.494 [2024-11-20T07:17:20.697Z] Total : 11270.00 44.02 0.00 0.00 0.00 0.00 0.00 00:19:56.494 00:19:57.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.427 Nvme0n1 : 5.00 11251.20 43.95 0.00 0.00 0.00 0.00 0.00 00:19:57.427 [2024-11-20T07:17:21.630Z] =================================================================================================================== 00:19:57.427 [2024-11-20T07:17:21.630Z] Total : 11251.20 43.95 0.00 0.00 0.00 0.00 0.00 00:19:57.427 00:19:58.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:58.357 Nvme0n1 : 6.00 9946.67 38.85 0.00 0.00 0.00 0.00 0.00 00:19:58.357 [2024-11-20T07:17:22.560Z] =================================================================================================================== 00:19:58.357 [2024-11-20T07:17:22.560Z] Total : 9946.67 38.85 0.00 0.00 0.00 0.00 0.00 00:19:58.357 00:19:59.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.377 Nvme0n1 : 7.00 10031.43 39.19 0.00 0.00 0.00 0.00 0.00 00:19:59.377 [2024-11-20T07:17:23.580Z] =================================================================================================================== 00:19:59.377 [2024-11-20T07:17:23.580Z] Total : 10031.43 39.19 0.00 0.00 0.00 0.00 0.00 00:19:59.377 00:20:00.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:00.309 Nvme0n1 : 8.00 9940.25 38.83 0.00 0.00 0.00 0.00 0.00 00:20:00.309 [2024-11-20T07:17:24.512Z] =================================================================================================================== 00:20:00.309 [2024-11-20T07:17:24.512Z] Total : 9940.25 38.83 0.00 0.00 0.00 0.00 0.00 00:20:00.309 00:20:01.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.241 Nvme0n1 : 9.00 9790.22 38.24 0.00 0.00 0.00 0.00 0.00 00:20:01.241 [2024-11-20T07:17:25.444Z] =================================================================================================================== 00:20:01.241 [2024-11-20T07:17:25.444Z] Total : 9790.22 38.24 0.00 0.00 0.00 0.00 0.00 00:20:01.241 00:20:02.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.617 Nvme0n1 : 10.00 9578.30 37.42 0.00 0.00 0.00 0.00 0.00 00:20:02.617 [2024-11-20T07:17:26.820Z] =================================================================================================================== 00:20:02.617 [2024-11-20T07:17:26.820Z] Total : 9578.30 37.42 0.00 0.00 0.00 0.00 0.00 00:20:02.617 00:20:02.617 00:20:02.617 Latency(us) 00:20:02.617 [2024-11-20T07:17:26.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.617 Nvme0n1 : 10.01 9586.04 37.45 0.00 0.00 13349.99 4713.55 709805.29 00:20:02.617 [2024-11-20T07:17:26.820Z] =================================================================================================================== 00:20:02.617 [2024-11-20T07:17:26.820Z] Total : 9586.04 37.45 0.00 0.00 13349.99 4713.55 709805.29 00:20:02.617 { 00:20:02.617 "results": [ 00:20:02.617 { 00:20:02.617 "job": "Nvme0n1", 00:20:02.617 "core_mask": "0x2", 00:20:02.617 "workload": "randwrite", 00:20:02.617 "status": "finished", 00:20:02.617 "queue_depth": 128, 00:20:02.617 "io_size": 4096, 00:20:02.617 "runtime": 
10.005278, 00:20:02.617 "iops": 9586.040487830523, 00:20:02.617 "mibps": 37.44547065558798, 00:20:02.617 "io_failed": 0, 00:20:02.617 "io_timeout": 0, 00:20:02.617 "avg_latency_us": 13349.994411678133, 00:20:02.617 "min_latency_us": 4713.550769230769, 00:20:02.617 "max_latency_us": 709805.2923076923 00:20:02.617 } 00:20:02.617 ], 00:20:02.617 "core_count": 1 00:20:02.617 } 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62509 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 62509 ']' 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 62509 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62509 00:20:02.617 killing process with pid 62509 00:20:02.617 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.617 00:20:02.617 Latency(us) 00:20:02.617 [2024-11-20T07:17:26.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.617 [2024-11-20T07:17:26.820Z] =================================================================================================================== 00:20:02.617 [2024-11-20T07:17:26.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62509' 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 62509 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 62509 00:20:02.617 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:02.876 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:03.134 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:03.134 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:03.134 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62170 
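Note: this kill -9 is the step that makes the dirty variant dirty. The nvmf target app (pid 62170) is killed with the lvstore still open, so the blobstore never shuts down cleanly and its on-disk state is left marked in use; the shell's "Killed" notice and the true that follows are expected. A minimal sketch of the pattern, with hypothetical $pid, $rpc, and $aio shorthands:

    kill -9 "$pid"        # no graceful unload: the lvstore is left dirty on disk
    wait "$pid" || true   # reap the process; a non-zero status is expected here
    # A restarted target that re-creates the AIO bdev on the same file must then
    # replay the blobstore metadata, which the log below reports as
    # "Performing recovery on blobstore" / "Recover: blob 0x0" / "Recover: blob 0x1".
    $rpc bdev_aio_create "$aio" aio_bdev 4096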
00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62170 00:20:03.135 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62170 Killed "${NVMF_APP[@]}" "$@" 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:03.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=62665 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 62665 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 62665 ']' 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.135 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:03.393 [2024-11-20 07:17:27.354026] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:03.393 [2024-11-20 07:17:27.354080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.393 [2024-11-20 07:17:27.497436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.393 [2024-11-20 07:17:27.531587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.393 [2024-11-20 07:17:27.531787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.393 [2024-11-20 07:17:27.531854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.393 [2024-11-20 07:17:27.531881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.393 [2024-11-20 07:17:27.531896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
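Note: the freshly started target (pid 62665) is where the dirty state pays off: re-creating aio_bdev on the untouched file forces the blobstore to replay its metadata, visible just below as "Performing recovery on blobstore" and the per-blob "Recover" notices. After verifying the recovered lvstore still reports 61 free and 99 total clusters, the test deletes aio_bdev out from under it and asserts that the lvstore lookup now fails with -19 (No such device). The NOT helper used for that assertion effectively inverts the command's exit status; a sketch of the same check, with hypothetical $rpc and $lvs shorthands and the error payload taken verbatim from the log:

    if $rpc bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore lookup should fail once aio_bdev is gone" >&2
        exit 1
    fi
    # Expected JSON-RPC error response: {"code": -19, "message": "No such device"}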
00:20:03.393 [2024-11-20 07:17:27.532167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.393 [2024-11-20 07:17:27.561867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:04.328 [2024-11-20 07:17:28.455538] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:04.328 [2024-11-20 07:17:28.456171] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:04.328 [2024-11-20 07:17:28.456355] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6100ae61-45ee-455f-98b1-551f13535b02 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6100ae61-45ee-455f-98b1-551f13535b02 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:04.328 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:20:04.329 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:04.329 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:04.329 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:04.587 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6100ae61-45ee-455f-98b1-551f13535b02 -t 2000 00:20:04.844 [ 00:20:04.844 { 00:20:04.844 "name": "6100ae61-45ee-455f-98b1-551f13535b02", 00:20:04.844 "aliases": [ 00:20:04.844 "lvs/lvol" 00:20:04.844 ], 00:20:04.844 "product_name": "Logical Volume", 00:20:04.844 "block_size": 4096, 00:20:04.844 "num_blocks": 38912, 00:20:04.844 "uuid": "6100ae61-45ee-455f-98b1-551f13535b02", 00:20:04.844 "assigned_rate_limits": { 00:20:04.844 "rw_ios_per_sec": 0, 00:20:04.845 "rw_mbytes_per_sec": 0, 00:20:04.845 "r_mbytes_per_sec": 0, 00:20:04.845 "w_mbytes_per_sec": 0 00:20:04.845 }, 00:20:04.845 
"claimed": false, 00:20:04.845 "zoned": false, 00:20:04.845 "supported_io_types": { 00:20:04.845 "read": true, 00:20:04.845 "write": true, 00:20:04.845 "unmap": true, 00:20:04.845 "flush": false, 00:20:04.845 "reset": true, 00:20:04.845 "nvme_admin": false, 00:20:04.845 "nvme_io": false, 00:20:04.845 "nvme_io_md": false, 00:20:04.845 "write_zeroes": true, 00:20:04.845 "zcopy": false, 00:20:04.845 "get_zone_info": false, 00:20:04.845 "zone_management": false, 00:20:04.845 "zone_append": false, 00:20:04.845 "compare": false, 00:20:04.845 "compare_and_write": false, 00:20:04.845 "abort": false, 00:20:04.845 "seek_hole": true, 00:20:04.845 "seek_data": true, 00:20:04.845 "copy": false, 00:20:04.845 "nvme_iov_md": false 00:20:04.845 }, 00:20:04.845 "driver_specific": { 00:20:04.845 "lvol": { 00:20:04.845 "lvol_store_uuid": "5f1636ab-6cf6-4d7d-ace9-238ac748bf60", 00:20:04.845 "base_bdev": "aio_bdev", 00:20:04.845 "thin_provision": false, 00:20:04.845 "num_allocated_clusters": 38, 00:20:04.845 "snapshot": false, 00:20:04.845 "clone": false, 00:20:04.845 "esnap_clone": false 00:20:04.845 } 00:20:04.845 } 00:20:04.845 } 00:20:04.845 ] 00:20:04.845 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:20:04.845 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:04.845 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:05.124 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:05.124 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:05.124 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:05.124 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:05.124 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:05.387 [2024-11-20 07:17:29.485456] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.387 07:17:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:05.387 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:05.644 request: 00:20:05.644 { 00:20:05.644 "uuid": "5f1636ab-6cf6-4d7d-ace9-238ac748bf60", 00:20:05.644 "method": "bdev_lvol_get_lvstores", 00:20:05.644 "req_id": 1 00:20:05.644 } 00:20:05.644 Got JSON-RPC error response 00:20:05.644 response: 00:20:05.644 { 00:20:05.644 "code": -19, 00:20:05.644 "message": "No such device" 00:20:05.644 } 00:20:05.644 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:20:05.644 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.644 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.644 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.644 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:05.902 aio_bdev 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6100ae61-45ee-455f-98b1-551f13535b02 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6100ae61-45ee-455f-98b1-551f13535b02 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:05.902 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:06.160 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6100ae61-45ee-455f-98b1-551f13535b02 -t 2000 00:20:06.160 [ 00:20:06.160 { 
00:20:06.160 "name": "6100ae61-45ee-455f-98b1-551f13535b02", 00:20:06.160 "aliases": [ 00:20:06.160 "lvs/lvol" 00:20:06.160 ], 00:20:06.160 "product_name": "Logical Volume", 00:20:06.160 "block_size": 4096, 00:20:06.160 "num_blocks": 38912, 00:20:06.160 "uuid": "6100ae61-45ee-455f-98b1-551f13535b02", 00:20:06.160 "assigned_rate_limits": { 00:20:06.160 "rw_ios_per_sec": 0, 00:20:06.160 "rw_mbytes_per_sec": 0, 00:20:06.160 "r_mbytes_per_sec": 0, 00:20:06.160 "w_mbytes_per_sec": 0 00:20:06.160 }, 00:20:06.160 "claimed": false, 00:20:06.160 "zoned": false, 00:20:06.160 "supported_io_types": { 00:20:06.160 "read": true, 00:20:06.160 "write": true, 00:20:06.160 "unmap": true, 00:20:06.160 "flush": false, 00:20:06.160 "reset": true, 00:20:06.160 "nvme_admin": false, 00:20:06.160 "nvme_io": false, 00:20:06.160 "nvme_io_md": false, 00:20:06.160 "write_zeroes": true, 00:20:06.160 "zcopy": false, 00:20:06.160 "get_zone_info": false, 00:20:06.160 "zone_management": false, 00:20:06.160 "zone_append": false, 00:20:06.160 "compare": false, 00:20:06.160 "compare_and_write": false, 00:20:06.160 "abort": false, 00:20:06.160 "seek_hole": true, 00:20:06.160 "seek_data": true, 00:20:06.160 "copy": false, 00:20:06.160 "nvme_iov_md": false 00:20:06.160 }, 00:20:06.160 "driver_specific": { 00:20:06.160 "lvol": { 00:20:06.160 "lvol_store_uuid": "5f1636ab-6cf6-4d7d-ace9-238ac748bf60", 00:20:06.160 "base_bdev": "aio_bdev", 00:20:06.160 "thin_provision": false, 00:20:06.160 "num_allocated_clusters": 38, 00:20:06.160 "snapshot": false, 00:20:06.160 "clone": false, 00:20:06.160 "esnap_clone": false 00:20:06.160 } 00:20:06.160 } 00:20:06.160 } 00:20:06.160 ] 00:20:06.160 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:20:06.160 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:06.160 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:06.419 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:06.420 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:06.420 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:06.677 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:06.678 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6100ae61-45ee-455f-98b1-551f13535b02 00:20:06.934 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f1636ab-6cf6-4d7d-ace9-238ac748bf60 00:20:07.190 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:07.446 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:07.704 ************************************ 00:20:07.704 END TEST lvs_grow_dirty 00:20:07.704 ************************************ 00:20:07.704 00:20:07.704 real 0m18.881s 00:20:07.704 user 0m39.955s 00:20:07.704 sys 0m5.633s 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:07.704 nvmf_trace.0 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:07.704 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:08.638 rmmod nvme_tcp 00:20:08.638 rmmod nvme_fabrics 00:20:08.638 rmmod nvme_keyring 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 62665 ']' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 62665 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 62665 ']' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 62665 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:20:08.638 07:17:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62665 00:20:08.638 killing process with pid 62665 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62665' 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 62665 00:20:08.638 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 62665 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:08.896 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:20:08.896 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:20:09.154 ************************************ 00:20:09.154 END TEST nvmf_lvs_grow 00:20:09.154 ************************************ 00:20:09.154 00:20:09.154 real 0m38.357s 00:20:09.154 user 1m1.636s 00:20:09.154 sys 0m8.886s 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:09.154 ************************************ 00:20:09.154 START TEST nvmf_bdev_io_wait 00:20:09.154 ************************************ 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:09.154 * Looking for test storage... 
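The nvmf_fini teardown traced above reduces to a short pattern: drop the target namespace, delete the bridge if it still exists, skip devices that left with the namespace, and restore iptables without the SPDK-tagged rules. A condensed sketch, with the namespace, bridge, and device names as in the log; the real nvmf/setup.sh adds error handling and xtrace plumbing omitted here:

    ip netns delete nvmf_ns_spdk 2>/dev/null   # removes target0/target1 along with the namespace
    [[ -e /sys/class/net/nvmf_br/address ]] && ip link delete nvmf_br
    for dev in initiator0 initiator1 target0 target1; do
        # Devices that left with the namespace fail this existence test;
        # that is the "continue" branch visible for target0/target1 above.
        [[ -e /sys/class/net/$dev/address ]] || continue
        ip link delete "$dev"
    done
    # Restore iptables minus the SPDK-tagged rules (the iptr helper above).
    iptables-save | grep -v SPDK_NVMF | iptables-restore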
00:20:09.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.154 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:09.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.155 --rc genhtml_branch_coverage=1 00:20:09.155 --rc genhtml_function_coverage=1 00:20:09.155 --rc genhtml_legend=1 00:20:09.155 --rc geninfo_all_blocks=1 00:20:09.155 --rc geninfo_unexecuted_blocks=1 00:20:09.155 00:20:09.155 ' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:09.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.155 --rc genhtml_branch_coverage=1 00:20:09.155 --rc genhtml_function_coverage=1 00:20:09.155 --rc genhtml_legend=1 00:20:09.155 --rc geninfo_all_blocks=1 00:20:09.155 --rc geninfo_unexecuted_blocks=1 00:20:09.155 00:20:09.155 ' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:09.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.155 --rc genhtml_branch_coverage=1 00:20:09.155 --rc genhtml_function_coverage=1 00:20:09.155 --rc genhtml_legend=1 00:20:09.155 --rc geninfo_all_blocks=1 00:20:09.155 --rc geninfo_unexecuted_blocks=1 00:20:09.155 00:20:09.155 ' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:09.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.155 --rc genhtml_branch_coverage=1 00:20:09.155 --rc genhtml_function_coverage=1 00:20:09.155 --rc genhtml_legend=1 00:20:09.155 --rc geninfo_all_blocks=1 00:20:09.155 --rc geninfo_unexecuted_blocks=1 00:20:09.155 00:20:09.155 ' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:09.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@223 -- # create_target_ns 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:09.155 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:09.156 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:09.156 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:09.156 10.0.0.1 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:09.156 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:09.422 10.0.0.2 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@66 -- # set_up initiator0 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:09.422 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 
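The setup traced since nvmftestinit builds veth pairs between the root namespace and a target namespace, joined through the nvmf_br bridge. Condensed for pair 0 (the trace repeats it as initiator1/target1 with 10.0.0.3 and 10.0.0.4), a rough sketch under the same naming; ifalias is written so later helpers can read each device's address back:

    ns=nvmf_ns_spdk
    ip netns add "$ns"
    ip netns exec "$ns" ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns "$ns"                 # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec "$ns" tee /sys/class/net/target0/ifalias

    ip link set initiator0 up
    ip netns exec "$ns" ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up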
00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:09.423 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:09.424 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:09.424 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:09.425 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 
10.0.0.3 00:20:09.426 10.0.0.3 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:09.426 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:09.427 10.0.0.4 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:09.427 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:09.427 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:09.428 
07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:09.428 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:09.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:09.429 00:20:09.429 --- 10.0.0.1 ping statistics --- 00:20:09.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.429 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:20:09.429 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:09.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:09.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:20:09.430 00:20:09.430 --- 10.0.0.2 ping statistics --- 00:20:09.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.430 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:09.430 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.431 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:09.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:09.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:09.432 00:20:09.432 --- 10.0.0.3 ping statistics --- 00:20:09.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.432 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:20:09.432 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:09.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:09.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:20:09.433 00:20:09.433 --- 10.0.0.4 ping statistics --- 00:20:09.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.433 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:09.433 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.434 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:09.435 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:09.435 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.436 07:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:09.436 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:20:09.436 ' 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:09.437 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=63036 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 63036 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63036 ']' 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.702 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:09.702 [2024-11-20 07:17:33.667999] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:09.702 [2024-11-20 07:17:33.668063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.702 [2024-11-20 07:17:33.809473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.702 [2024-11-20 07:17:33.848071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.702 [2024-11-20 07:17:33.848387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.702 [2024-11-20 07:17:33.848517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.702 [2024-11-20 07:17:33.848733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.702 [2024-11-20 07:17:33.848792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
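[editor's note] For orientation, the network fixture the trace above assembles condenses to the sketch below. It is reconstructed from the echoed commands, not the literal body of nvmf/setup.sh, and the octet split inside val_to_ip is an assumption: xtrace only ever shows the already-expanded printf.

# Second initiator/target pair, as traced (names and addresses copied from the log).
# Target ends live inside the nvmf_ns_spdk namespace; each *_br end is enslaved
# to the nvmf_br bridge on the host side.
ip link add target1 type veth peer name target1_br
ip link set target1 up
ip link set target1_br up
ip link set target1 netns nvmf_ns_spdk

# setup.sh passes IPv4 addresses around as plain integers and converts on demand,
# e.g. 167772163 == 0x0A000003 == 10.0.0.3. Shift-based split is assumed:
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 255)) \
        $(((val >> 8) & 255)) $((val & 255))
}

ip addr add "$(val_to_ip 167772163)/24" dev initiator1                # 10.0.0.3
ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772164)/24" dev target1   # 10.0.0.4
ip link set initiator1_br master nvmf_br
ip link set target1_br master nvmf_br
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator, as in ping_ips
ping -c 1 10.0.0.4                              # host -> target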
00:20:09.702 [2024-11-20 07:17:33.849629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.702 [2024-11-20 07:17:33.849717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.702 [2024-11-20 07:17:33.849786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.702 [2024-11-20 07:17:33.849787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 [2024-11-20 07:17:34.619968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 [2024-11-20 07:17:34.630565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 Malloc0 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 [2024-11-20 07:17:34.672932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63071 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63073 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63075 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:10.635 { 00:20:10.635 "params": { 00:20:10.635 "name": "Nvme$subsystem", 00:20:10.635 "trtype": "$TEST_TRANSPORT", 00:20:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.635 "adrfam": "ipv4", 00:20:10.635 "trsvcid": "$NVMF_PORT", 00:20:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.635 "hdgst": ${hdgst:-false}, 00:20:10.635 "ddgst": ${ddgst:-false} 00:20:10.635 }, 00:20:10.635 "method": "bdev_nvme_attach_controller" 00:20:10.635 } 00:20:10.635 EOF 00:20:10.635 )") 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:10.635 { 00:20:10.635 "params": { 00:20:10.635 "name": "Nvme$subsystem", 00:20:10.635 "trtype": "$TEST_TRANSPORT", 00:20:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.635 "adrfam": "ipv4", 00:20:10.635 "trsvcid": "$NVMF_PORT", 00:20:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.635 "hdgst": ${hdgst:-false}, 00:20:10.635 "ddgst": ${ddgst:-false} 00:20:10.635 }, 00:20:10.635 "method": "bdev_nvme_attach_controller" 00:20:10.635 } 00:20:10.635 EOF 00:20:10.635 )") 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:20:10.635 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:10.636 { 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme$subsystem", 00:20:10.636 "trtype": "$TEST_TRANSPORT", 00:20:10.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "$NVMF_PORT", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.636 "hdgst": ${hdgst:-false}, 00:20:10.636 "ddgst": ${ddgst:-false} 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 } 00:20:10.636 EOF 00:20:10.636 )") 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:10.636 { 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme$subsystem", 00:20:10.636 "trtype": "$TEST_TRANSPORT", 00:20:10.636 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "$NVMF_PORT", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.636 "hdgst": ${hdgst:-false}, 00:20:10.636 "ddgst": ${ddgst:-false} 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 } 00:20:10.636 EOF 00:20:10.636 )") 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63077 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme1", 00:20:10.636 "trtype": "tcp", 00:20:10.636 "traddr": "10.0.0.2", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "4420", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.636 "hdgst": false, 00:20:10.636 "ddgst": false 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 }' 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme1", 00:20:10.636 "trtype": "tcp", 00:20:10.636 "traddr": "10.0.0.2", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "4420", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.636 "hdgst": false, 00:20:10.636 "ddgst": false 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 }' 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme1", 00:20:10.636 "trtype": "tcp", 00:20:10.636 "traddr": "10.0.0.2", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "4420", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.636 "hdgst": false, 00:20:10.636 "ddgst": false 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 }' 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:10.636 "params": { 00:20:10.636 "name": "Nvme1", 00:20:10.636 "trtype": "tcp", 00:20:10.636 "traddr": "10.0.0.2", 00:20:10.636 "adrfam": "ipv4", 00:20:10.636 "trsvcid": "4420", 00:20:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.636 "hdgst": false, 00:20:10.636 "ddgst": false 00:20:10.636 }, 00:20:10.636 "method": "bdev_nvme_attach_controller" 00:20:10.636 }' 00:20:10.636 [2024-11-20 07:17:34.715820] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:10.636 [2024-11-20 07:17:34.715999] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:10.636 [2024-11-20 07:17:34.716057] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:10.636 [2024-11-20 07:17:34.716435] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:10.636 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63071 00:20:10.636 [2024-11-20 07:17:34.730958] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:10.636 [2024-11-20 07:17:34.731015] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:10.636 [2024-11-20 07:17:34.735502] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:10.636 [2024-11-20 07:17:34.735554] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:10.893 [2024-11-20 07:17:34.891661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.893 [2024-11-20 07:17:34.920922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:10.893 [2024-11-20 07:17:34.930923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.893 [2024-11-20 07:17:34.933388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.893 [2024-11-20 07:17:34.960011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:20:10.893 [2024-11-20 07:17:34.970588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.893 [2024-11-20 07:17:34.972676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.893 [2024-11-20 07:17:34.998681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:10.893 [2024-11-20 07:17:35.011215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.893 Running I/O for 1 seconds... 00:20:10.893 [2024-11-20 07:17:35.042939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.893 Running I/O for 1 seconds... 00:20:10.893 [2024-11-20 07:17:35.085640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:11.150 [2024-11-20 07:17:35.104657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:11.150 Running I/O for 1 seconds... 00:20:11.150 Running I/O for 1 seconds... 
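[editor's note] Four bdevperf instances run concurrently against the same cnode1 namespace, one workload each, which is what produces the four per-workload latency tables below. Condensed from the command lines in the trace (the /dev/fd/63 config is the jq-joined JSON fragment printed above); distinct core masks and -i shm ids keep the DPDK secondary processes from colliding:

BP=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BP -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID    # harness waits per pid (wait 63071 etc.)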
00:20:12.082 183368.00 IOPS, 716.28 MiB/s 00:20:12.083 Latency(us) 00:20:12.083 [2024-11-20T07:17:36.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.083 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:12.083 Nvme1n1 : 1.00 182966.52 714.71 0.00 0.00 695.57 343.43 2192.94 00:20:12.083 [2024-11-20T07:17:36.286Z] =================================================================================================================== 00:20:12.083 [2024-11-20T07:17:36.286Z] Total : 182966.52 714.71 0.00 0.00 695.57 343.43 2192.94 00:20:12.083 15515.00 IOPS, 60.61 MiB/s 00:20:12.083 Latency(us) 00:20:12.083 [2024-11-20T07:17:36.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.083 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:12.083 Nvme1n1 : 1.01 15604.10 60.95 0.00 0.00 8184.87 3352.42 20971.52 00:20:12.083 [2024-11-20T07:17:36.286Z] =================================================================================================================== 00:20:12.083 [2024-11-20T07:17:36.286Z] Total : 15604.10 60.95 0.00 0.00 8184.87 3352.42 20971.52 00:20:12.083 10399.00 IOPS, 40.62 MiB/s 00:20:12.083 Latency(us) 00:20:12.083 [2024-11-20T07:17:36.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.083 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:12.083 Nvme1n1 : 1.01 10449.47 40.82 0.00 0.00 12200.75 6150.30 22887.19 00:20:12.083 [2024-11-20T07:17:36.286Z] =================================================================================================================== 00:20:12.083 [2024-11-20T07:17:36.286Z] Total : 10449.47 40.82 0.00 0.00 12200.75 6150.30 22887.19 00:20:12.083 11358.00 IOPS, 44.37 MiB/s 00:20:12.083 Latency(us) 00:20:12.083 [2024-11-20T07:17:36.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.083 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:12.083 Nvme1n1 : 1.01 11430.08 44.65 0.00 0.00 11161.91 5217.67 24197.91 00:20:12.083 [2024-11-20T07:17:36.286Z] =================================================================================================================== 00:20:12.083 [2024-11-20T07:17:36.286Z] Total : 11430.08 44.65 0.00 0.00 11161.91 5217.67 24197.91 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63073 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63075 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63077 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # 
nvmfcleanup 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:12.340 rmmod nvme_tcp 00:20:12.340 rmmod nvme_fabrics 00:20:12.340 rmmod nvme_keyring 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 63036 ']' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 63036 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63036 ']' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63036 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63036 00:20:12.340 killing process with pid 63036 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63036' 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63036 00:20:12.340 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63036 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:12.598 07:17:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:12.598 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 
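[editor's note] Teardown (nvmftestfini -> nvmf_fini) mirrors the setup: unload the kernel initiator modules, drop the namespace, then delete the bridge and the host-side veth ends. A hedged condensation follows; the namespace removal itself is muted in the trace by the '15> /dev/null' redirect, so that step is inferred:

modprobe -v -r nvme-tcp                 # rmmod lines show nvme_tcp, nvme_fabrics, nvme_keyring going
ip netns delete nvmf_ns_spdk            # _remove_target_ns; takes target0/target1 with it (assumed)
ip link delete nvmf_br                  # main bridge
ip link delete initiator0               # host-side veth ends; the target* entries are skipped
ip link delete initiator1               # ('continue') since their sysfs paths are already gone
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the tagged ACCEPT rules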
00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:20:12.599 00:20:12.599 real 0m3.587s 00:20:12.599 user 0m15.333s 00:20:12.599 sys 0m1.654s 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:12.599 ************************************ 00:20:12.599 END TEST nvmf_bdev_io_wait 00:20:12.599 ************************************ 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:12.599 ************************************ 00:20:12.599 START TEST nvmf_queue_depth 00:20:12.599 ************************************ 00:20:12.599 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:12.858 * Looking for test storage... 00:20:12.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:20:12.858 07:17:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:20:12.858 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:12.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.859 --rc genhtml_branch_coverage=1 00:20:12.859 --rc genhtml_function_coverage=1 00:20:12.859 --rc genhtml_legend=1 00:20:12.859 --rc geninfo_all_blocks=1 00:20:12.859 --rc geninfo_unexecuted_blocks=1 00:20:12.859 00:20:12.859 ' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.859 --rc genhtml_branch_coverage=1 00:20:12.859 --rc genhtml_function_coverage=1 00:20:12.859 --rc genhtml_legend=1 00:20:12.859 --rc geninfo_all_blocks=1 00:20:12.859 --rc geninfo_unexecuted_blocks=1 00:20:12.859 00:20:12.859 ' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.859 --rc genhtml_branch_coverage=1 00:20:12.859 --rc genhtml_function_coverage=1 00:20:12.859 --rc genhtml_legend=1 00:20:12.859 --rc geninfo_all_blocks=1 00:20:12.859 --rc geninfo_unexecuted_blocks=1 00:20:12.859 00:20:12.859 ' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.859 --rc genhtml_branch_coverage=1 00:20:12.859 --rc genhtml_function_coverage=1 00:20:12.859 --rc genhtml_legend=1 00:20:12.859 --rc geninfo_all_blocks=1 
00:20:12.859 --rc geninfo_unexecuted_blocks=1 00:20:12.859 00:20:12.859 ' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:12.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 
-- # '[' -n '' ']' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:12.859 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns 
target0 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:12.860 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:12.861 10.0.0.1 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:12.861 10.0.0.2 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@66 -- # set_up initiator0 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:12.861 07:17:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:12.861 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- 
# [[ veth == veth ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:13.120 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:13.121 10.0.0.3 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:13.121 10.0.0.4 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:13.121 07:17:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:13.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
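The address bookkeeping being resolved above keeps each IP as a 32-bit integer (167772161 == 0x0A000001 == 10.0.0.1) and mirrors every assigned address into /sys/class/net/<dev>/ifalias so get_ip_address can read it back later without parsing `ip addr` output. A condensed sketch; the octet arithmetic in val_to_ip is an assumption (the trace only shows the resulting printf), and the *_sketch helpers are hypothetical condensations of set_ip/get_ip_address:

    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
    }

    set_ip_sketch() {           # assign the address and record it in ifalias
      local dev=$1 ip
      ip=$(val_to_ip "$2")
      ip addr add "$ip/24" dev "$dev"
      echo "$ip" | tee "/sys/class/net/$dev/ifalias"
    }

    get_ip_sketch() {           # read the recorded address back
      cat "/sys/class/net/$1/ifalias"
    }

    set_ip_sketch initiator0 167772161   # 0x0A000001 -> 10.0.0.1
    get_ip_sketch initiator0             # -> 10.0.0.1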
00:20:13.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:20:13.121 00:20:13.121 --- 10.0.0.1 ping statistics --- 00:20:13.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.121 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:13.121 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:13.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:20:13.122 00:20:13.122 --- 10.0.0.2 ping statistics --- 00:20:13.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.122 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:13.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:13.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:13.122 00:20:13.122 --- 10.0.0.3 ping statistics --- 00:20:13.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.122 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:13.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
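ping_ip, traced through these four connectivity checks, picks its execution context with a bash nameref: initiator-side addresses are pinged from inside nvmf_ns_spdk, target-side addresses from the host side. A minimal sketch of that dispatch; only the eval lines appear verbatim in the trace, the surrounding body is an assumption:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

    ping_ip() {
      local ip=$1 in_ns=$2 count=1
      if [[ -n $in_ns ]]; then
        local -n ns=$in_ns               # nameref onto the wrapper array
        eval "${ns[*]} ping -c $count $ip"
      else
        eval " ping -c $count $ip"
      fi
    }

    ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD  # initiator IP: pinged from the netns
    ping_ip 10.0.0.2                     # target IP: pinged from the host side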
00:20:13.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:20:13.122 00:20:13.122 --- 10.0.0.4 ping statistics --- 00:20:13.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.122 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.122 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:20:13.123 ' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@328 -- # nvmfpid=63343 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 63343 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63343 ']' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.123 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:13.380 [2024-11-20 07:17:37.331640] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:13.380 [2024-11-20 07:17:37.331701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.380 [2024-11-20 07:17:37.475597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.380 [2024-11-20 07:17:37.511829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.381 [2024-11-20 07:17:37.511876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.381 [2024-11-20 07:17:37.511883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.381 [2024-11-20 07:17:37.511888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.381 [2024-11-20 07:17:37.511892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
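[Annotation] At this point nvmf_tgt (pid 63343) has been launched inside the nvmf_ns_spdk namespace and the harness is waiting for its RPC socket. A minimal sketch of that start-and-wait pattern, assuming the stock rpc.py client — the retry loop is illustrative, not the harness's exact waitforlisten implementation:

    # Launch the target in the test namespace, then poll its UNIX-domain RPC
    # socket; rpc_get_methods is a cheap query that succeeds once the server
    # is actually listening.
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done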
00:20:13.381 [2024-11-20 07:17:37.512149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.381 [2024-11-20 07:17:37.543581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 [2024-11-20 07:17:38.248688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.313 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 Malloc0 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 [2024-11-20 
07:17:38.287662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=63375 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 63375 /var/tmp/bdevperf.sock 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63375 ']' 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.314 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 [2024-11-20 07:17:38.326140] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
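[Annotation] Stripped of the xtrace noise, the target-side configuration that the rpc_cmd calls above performed reduces to five RPCs, with arguments exactly as they appear in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that listener from the initiator side with -q 1024, which is the queue depth this test is exercising.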
00:20:14.314 [2024-11-20 07:17:38.326205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63375 ] 00:20:14.314 [2024-11-20 07:17:38.465810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.314 [2024-11-20 07:17:38.500996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.571 [2024-11-20 07:17:38.531504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.140 NVMe0n1 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.140 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.401 Running I/O for 10 seconds... 00:20:17.265 7168.00 IOPS, 28.00 MiB/s [2024-11-20T07:17:42.429Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-20T07:17:43.803Z] 8854.33 IOPS, 34.59 MiB/s [2024-11-20T07:17:44.369Z] 9230.00 IOPS, 36.05 MiB/s [2024-11-20T07:17:45.739Z] 9569.20 IOPS, 37.38 MiB/s [2024-11-20T07:17:46.683Z] 9764.00 IOPS, 38.14 MiB/s [2024-11-20T07:17:47.663Z] 9976.71 IOPS, 38.97 MiB/s [2024-11-20T07:17:48.597Z] 10144.88 IOPS, 39.63 MiB/s [2024-11-20T07:17:49.529Z] 10282.56 IOPS, 40.17 MiB/s [2024-11-20T07:17:49.529Z] 10396.70 IOPS, 40.61 MiB/s 00:20:25.326 Latency(us) 00:20:25.326 [2024-11-20T07:17:49.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.326 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:25.326 Verification LBA range: start 0x0 length 0x4000 00:20:25.326 NVMe0n1 : 10.06 10435.20 40.76 0.00 0.00 97745.61 15123.69 87515.77 00:20:25.326 [2024-11-20T07:17:49.529Z] =================================================================================================================== 00:20:25.326 [2024-11-20T07:17:49.529Z] Total : 10435.20 40.76 0.00 0.00 97745.61 15123.69 87515.77 00:20:25.326 { 00:20:25.326 "results": [ 00:20:25.326 { 00:20:25.326 "job": "NVMe0n1", 00:20:25.326 "core_mask": "0x1", 00:20:25.326 "workload": "verify", 00:20:25.326 "status": "finished", 00:20:25.326 "verify_range": { 00:20:25.326 "start": 0, 00:20:25.326 "length": 16384 00:20:25.326 }, 00:20:25.326 "queue_depth": 1024, 00:20:25.326 "io_size": 4096, 00:20:25.326 "runtime": 10.057404, 00:20:25.326 "iops": 10435.197790602824, 00:20:25.326 "mibps": 40.76249136954228, 00:20:25.326 "io_failed": 0, 00:20:25.326 "io_timeout": 0, 00:20:25.326 "avg_latency_us": 97745.61371857782, 00:20:25.326 "min_latency_us": 15123.692307692309, 00:20:25.326 "max_latency_us": 87515.76615384615 
00:20:25.326 } 00:20:25.326 ], 00:20:25.326 "core_count": 1 00:20:25.326 } 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 63375 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63375 ']' 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63375 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63375 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.326 killing process with pid 63375 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63375' 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63375 00:20:25.326 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.326 00:20:25.326 Latency(us) 00:20:25.326 [2024-11-20T07:17:49.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.326 [2024-11-20T07:17:49.529Z] =================================================================================================================== 00:20:25.326 [2024-11-20T07:17:49.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.326 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63375 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:25.583 rmmod nvme_tcp 00:20:25.583 rmmod nvme_fabrics 00:20:25.583 rmmod nvme_keyring 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 63343 ']' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63343 ']' 
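[Annotation] The JSON result block above is internally consistent: at the 4096-byte I/O size, the reported IOPS and MiB/s figures agree. A quick check with plain awk (not part of the harness):

    awk 'BEGIN { printf "%.2f MiB/s\n", 10435.197790602824 * 4096 / 1048576 }'
    # prints 40.76 MiB/s, matching the "mibps" field

Note the runtime of 10.057 s versus the nominal -t 10: the small overrun is presumably the 1024-deep queue draining after the timer fires, and IOPS is normalized by the measured runtime rather than the nominal 10 s.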
00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.583 killing process with pid 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63343' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63343 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:25.583 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # 
eval ' ip link delete initiator0' 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:20:25.840 00:20:25.840 real 0m13.175s 00:20:25.840 user 0m23.152s 00:20:25.840 sys 0m1.682s 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:25.840 ************************************ 00:20:25.840 END TEST nvmf_queue_depth 00:20:25.840 ************************************ 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:25.840 ************************************ 00:20:25.840 START TEST 
nvmf_target_multipath 00:20:25.840 ************************************ 00:20:25.840 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:25.840 * Looking for test storage... 00:20:25.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:25.840 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.099 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:26.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.100 --rc genhtml_branch_coverage=1 00:20:26.100 --rc genhtml_function_coverage=1 00:20:26.100 --rc genhtml_legend=1 00:20:26.100 --rc geninfo_all_blocks=1 00:20:26.100 --rc geninfo_unexecuted_blocks=1 00:20:26.100 00:20:26.100 ' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:26.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.100 --rc genhtml_branch_coverage=1 00:20:26.100 --rc genhtml_function_coverage=1 00:20:26.100 --rc genhtml_legend=1 00:20:26.100 --rc geninfo_all_blocks=1 00:20:26.100 --rc geninfo_unexecuted_blocks=1 00:20:26.100 00:20:26.100 ' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:26.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.100 --rc genhtml_branch_coverage=1 00:20:26.100 --rc genhtml_function_coverage=1 00:20:26.100 --rc genhtml_legend=1 00:20:26.100 --rc geninfo_all_blocks=1 00:20:26.100 --rc geninfo_unexecuted_blocks=1 00:20:26.100 00:20:26.100 ' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:26.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.100 --rc genhtml_branch_coverage=1 00:20:26.100 --rc genhtml_function_coverage=1 00:20:26.100 --rc genhtml_legend=1 00:20:26.100 --rc geninfo_all_blocks=1 00:20:26.100 --rc geninfo_unexecuted_blocks=1 00:20:26.100 00:20:26.100 ' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:26.100 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:26.100 
07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:26.100 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
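[Annotation] What follows is the same veth/bridge construction the queue-depth test tore down a moment ago, replayed for the multipath run. Condensed to bare ip(8) commands — every name and address is the one the trace below assigns:

    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    # One veth pair per side; the *_br ends get enslaved to the bridge.
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    # The second pair (initiator1/target1, 10.0.0.3 and 10.0.0.4) repeats the same steps.

The ip_pool arithmetic feeding set_ip is a plain integer-to-dotted-quad split: 167772161 is 0x0A000001, which val_to_ip's printf '%u.%u.%u.%u' renders as 10.0.0.1, and each interface pair consumes two consecutive addresses from the pool.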
00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:26.101 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:26.101 
07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:26.101 10.0.0.1 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:26.101 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:26.101 10.0.0.2 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:26.101 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:26.102 
07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:26.102 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:26.102 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:26.102 10.0.0.3 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:26.102 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:26.361 10.0.0.4 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:26.361 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:26.361 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:26.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:20:26.361 00:20:26.361 --- 10.0.0.1 ping statistics --- 00:20:26.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.361 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:26.361 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:26.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:20:26.362 00:20:26.362 --- 10.0.0.2 ping statistics --- 00:20:26.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.362 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:26.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:26.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:20:26.362 00:20:26.362 --- 10.0.0.3 ping statistics --- 00:20:26.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.362 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:26.362 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:26.362 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:20:26.362 00:20:26.362 --- 10.0.0.4 ping statistics --- 00:20:26.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.362 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:26.362 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:20:26.362 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:26.363 07:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:20:26.363 ' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 
10.0.0.4 ']' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=63742 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 63742 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 63742 ']' 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.363 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:26.363 [2024-11-20 07:17:50.466107] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:26.363 [2024-11-20 07:17:50.466158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.620 [2024-11-20 07:17:50.602494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.620 [2024-11-20 07:17:50.638981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.620 [2024-11-20 07:17:50.639023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.620 [2024-11-20 07:17:50.639029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.620 [2024-11-20 07:17:50.639034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.620 [2024-11-20 07:17:50.639038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
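
For reference, the nvmfappstart step traced above boils down to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. A minimal standalone sketch, with the binary path, namespace, and core mask taken from the trace; the polling loop is an illustrative stand-in for waitforlisten (the framework's helper retries longer and reports diagnostics), and rpc_get_methods is used here only as a cheap liveness probe:

  # Launch the target on cores 0-3 inside the nvmf_ns_spdk namespace
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the target is ready to serve RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; break; }
      sleep 0.5
  done
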
00:20:26.620 [2024-11-20 07:17:50.639760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.620 [2024-11-20 07:17:50.639837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.620 [2024-11-20 07:17:50.640677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.620 [2024-11-20 07:17:50.640688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.620 [2024-11-20 07:17:50.671072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.187 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:27.445 [2024-11-20 07:17:51.511778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.445 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:27.702 Malloc0 00:20:27.702 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:20:27.961 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.961 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.218 [2024-11-20 07:17:52.296325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.218 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:20:28.476 [2024-11-20 07:17:52.496538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:20:28.477 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:20:28.477 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:20:28.734 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:20:28.734 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:20:28.734 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:28.734 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:28.734 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:20:30.631 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
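
The check_ana_state helper entered here reduces to polling the controller path's ana_state attribute in sysfs until it reports the expected ANA state, giving up after the 20-second timeout seen in the trace. A minimal sketch of that logic (the one-second retry interval is an assumption; the script's own loop and error reporting differ in detail):

  # Wait until /sys/block/<path>/ana_state matches the expected ANA state
  check_ana_state() {
      local path=$1 expected=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while (( timeout-- > 0 )); do
          [[ -e $ana_state_f && $(<"$ana_state_f") == "$expected" ]] && return 0
          sleep 1
      done
      echo "$path did not reach ANA state '$expected'" >&2
      return 1
  }

  check_ana_state nvme0c0n1 optimized   # as invoked in the trace
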
00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=63826 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:20:30.632 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:30.632 [global] 00:20:30.632 thread=1 00:20:30.632 invalidate=1 00:20:30.632 rw=randrw 00:20:30.632 time_based=1 00:20:30.632 runtime=6 00:20:30.632 ioengine=libaio 00:20:30.632 direct=1 00:20:30.632 bs=4096 00:20:30.632 iodepth=128 00:20:30.632 norandommap=0 00:20:30.632 numjobs=1 00:20:30.632 00:20:30.632 verify_dump=1 00:20:30.632 verify_backlog=512 00:20:30.632 verify_state_save=0 00:20:30.632 do_verify=1 00:20:30.632 verify=crc32c-intel 00:20:30.632 [job0] 00:20:30.632 filename=/dev/nvme0n1 00:20:30.632 Could not set queue depth (nvme0n1) 00:20:30.890 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:30.890 fio-3.35 00:20:30.890 Starting 1 thread 00:20:31.822 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:31.823 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
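
The failover step traced just above flips each listener's ANA state over the RPC interface while fio keeps I/O in flight, after which the block paths are re-checked. The two calls, extracted for readability with every argument exactly as traced (the RPC and NQN shell variables are added here only for brevity):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Fail the first path and demote the second; the kernel initiator is expected
  # to shift I/O to the remaining (non-optimized) path without errors
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
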
00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:32.081 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:32.340 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:32.599 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 63826 00:20:37.860 00:20:37.860 job0: (groupid=0, jobs=1): err= 0: pid=63847: Wed Nov 20 07:18:01 2024 00:20:37.860 read: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(345MiB/6005msec) 00:20:37.860 slat (usec): min=3, max=11717, avg=40.86, stdev=172.69 00:20:37.860 clat (usec): min=697, max=17448, avg=5928.84, stdev=1166.55 00:20:37.860 lat (usec): min=710, max=17454, avg=5969.70, stdev=1170.33 00:20:37.860 clat percentiles (usec): 00:20:37.860 | 1.00th=[ 3097], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5342], 00:20:37.860 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:20:37.860 | 70.00th=[ 6063], 80.00th=[ 6325], 90.00th=[ 7373], 95.00th=[ 8455], 00:20:37.860 | 99.00th=[ 9372], 99.50th=[10814], 99.90th=[13566], 99.95th=[13698], 00:20:37.860 | 99.99th=[16712] 00:20:37.860 bw ( KiB/s): min=18240, max=36416, per=51.17%, avg=30120.00, stdev=6837.75, samples=11 00:20:37.860 iops : min= 4560, max= 9104, avg=7530.00, stdev=1709.44, samples=11 00:20:37.860 write: IOPS=8632, BW=33.7MiB/s (35.4MB/s)(180MiB/5345msec); 0 zone resets 00:20:37.860 slat (usec): min=8, max=3137, avg=46.53, stdev=123.45 00:20:37.860 clat (usec): min=657, max=16803, avg=5153.15, stdev=1069.71 00:20:37.860 lat (usec): min=679, max=16821, avg=5199.68, stdev=1073.80 00:20:37.860 clat percentiles (usec): 00:20:37.860 | 1.00th=[ 2278], 5.00th=[ 3032], 10.00th=[ 3982], 20.00th=[ 4752], 00:20:37.860 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5342], 00:20:37.860 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5932], 95.00th=[ 6783], 00:20:37.860 | 99.00th=[ 8225], 99.50th=[ 8979], 99.90th=[13173], 99.95th=[15795], 00:20:37.860 | 99.99th=[16581] 00:20:37.860 bw ( KiB/s): min=19032, max=35752, per=87.41%, avg=30183.27, stdev=6507.38, samples=11 00:20:37.860 iops : min= 4758, max= 8938, avg=7545.82, stdev=1626.84, samples=11 00:20:37.860 lat (usec) : 750=0.01%, 1000=0.01% 00:20:37.860 lat (msec) : 2=0.19%, 4=5.34%, 10=93.93%, 20=0.53% 00:20:37.860 cpu : usr=3.76%, sys=17.44%, ctx=7739, majf=0, minf=90 00:20:37.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:37.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.860 issued rwts: total=88376,46142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.860 00:20:37.860 Run status group 0 (all jobs): 00:20:37.860 READ: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=345MiB (362MB), run=6005-6005msec 00:20:37.860 WRITE: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=180MiB (189MB), run=5345-5345msec 00:20:37.860 00:20:37.860 Disk stats (read/write): 00:20:37.860 nvme0n1: ios=87252/45220, merge=0/0, ticks=498807/221115, in_queue=719922, util=98.47% 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:37.860 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:37.861 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:37.861 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:20:37.861 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=63927 00:20:37.861 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:37.861 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:20:37.861 [global] 00:20:37.861 thread=1 00:20:37.861 invalidate=1 00:20:37.861 rw=randrw 00:20:37.861 time_based=1 00:20:37.861 runtime=6 00:20:37.861 ioengine=libaio 00:20:37.861 direct=1 00:20:37.861 bs=4096 00:20:37.861 iodepth=128 00:20:37.861 norandommap=0 00:20:37.861 numjobs=1 00:20:37.861 00:20:37.861 verify_dump=1 00:20:37.861 verify_backlog=512 00:20:37.861 verify_state_save=0 00:20:37.861 do_verify=1 00:20:37.861 verify=crc32c-intel 00:20:37.861 [job0] 00:20:37.861 filename=/dev/nvme0n1 00:20:37.861 Could not set queue depth (nvme0n1) 00:20:37.861 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:37.861 fio-3.35 00:20:37.861 Starting 1 thread 00:20:38.425 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:38.682 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:38.940 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:38.940 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:39.198 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 63927 00:20:44.484 00:20:44.484 job0: (groupid=0, jobs=1): err= 0: pid=63954: Wed Nov 20 07:18:07 2024 00:20:44.484 read: IOPS=15.7k, BW=61.4MiB/s (64.3MB/s)(368MiB/6005msec) 00:20:44.484 slat (usec): min=3, max=6737, avg=33.03, stdev=154.11 00:20:44.484 clat (usec): min=157, max=17681, avg=5576.85, stdev=2092.96 00:20:44.484 lat (usec): min=163, max=17688, avg=5609.87, stdev=2099.84 00:20:44.484 clat percentiles (usec): 00:20:44.484 | 1.00th=[ 441], 5.00th=[ 824], 10.00th=[ 1876], 20.00th=[ 5014], 00:20:44.484 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:20:44.484 | 70.00th=[ 6194], 80.00th=[ 6718], 90.00th=[ 7898], 95.00th=[ 8717], 00:20:44.484 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12387], 99.95th=[13566], 00:20:44.484 | 99.99th=[16581] 00:20:44.484 bw ( KiB/s): min=18192, max=48584, per=53.36%, avg=33530.91, stdev=9228.15, samples=11 00:20:44.484 iops : min= 4548, max=12146, avg=8382.73, stdev=2307.04, samples=11 00:20:44.484 write: IOPS=9523, BW=37.2MiB/s (39.0MB/s)(197MiB/5306msec); 0 zone resets 00:20:44.484 slat (usec): min=7, max=2981, avg=38.02, stdev=106.02 00:20:44.484 clat (usec): min=129, max=16539, avg=4679.88, stdev=1803.00 00:20:44.484 lat (usec): min=145, max=16558, avg=4717.90, stdev=1809.85 00:20:44.484 clat percentiles (usec): 00:20:44.484 | 1.00th=[ 388], 5.00th=[ 676], 10.00th=[ 1352], 20.00th=[ 3556], 00:20:44.484 | 30.00th=[ 4686], 40.00th=[ 4948], 50.00th=[ 5145], 60.00th=[ 5276], 00:20:44.484 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 6259], 95.00th=[ 6915], 00:20:44.484 | 99.00th=[ 8291], 99.50th=[10028], 99.90th=[12649], 99.95th=[15795], 00:20:44.484 | 99.99th=[16450] 00:20:44.484 bw ( KiB/s): min=18912, max=49184, per=87.98%, avg=33514.91, stdev=8888.27, samples=11 00:20:44.484 iops : min= 4728, max=12296, avg=8378.73, stdev=2222.07, samples=11 00:20:44.484 lat (usec) : 250=0.18%, 500=1.52%, 750=3.03%, 1000=2.49% 00:20:44.484 lat (msec) : 2=3.93%, 4=6.00%, 10=81.28%, 20=1.56% 00:20:44.484 cpu : usr=3.63%, sys=18.00%, ctx=10336, majf=0, minf=127 00:20:44.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:44.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.484 issued rwts: total=94328,50531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.484 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:20:44.484 00:20:44.484 Run status group 0 (all jobs): 00:20:44.484 READ: bw=61.4MiB/s (64.3MB/s), 61.4MiB/s-61.4MiB/s (64.3MB/s-64.3MB/s), io=368MiB (386MB), run=6005-6005msec 00:20:44.484 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=197MiB (207MB), run=5306-5306msec 00:20:44.484 00:20:44.484 Disk stats (read/write): 00:20:44.484 nvme0n1: ios=93243/49587, merge=0/0, ticks=502342/220172, in_queue=722514, util=98.62% 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:44.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:20:44.484 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:44.484 rmmod nvme_tcp 00:20:44.484 rmmod nvme_fabrics 00:20:44.484 rmmod nvme_keyring 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n 
63742 ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 63742 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 63742 ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 63742 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63742 00:20:44.484 killing process with pid 63742 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63742' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 63742 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 63742 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:44.484 07:18:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:20:44.484 ************************************ 00:20:44.484 END TEST nvmf_target_multipath 00:20:44.484 ************************************ 00:20:44.484 00:20:44.484 real 0m18.580s 00:20:44.484 user 1m10.094s 00:20:44.484 sys 0m7.336s 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 07:18:08 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 ************************************ 00:20:44.484 START TEST nvmf_zcopy 00:20:44.484 ************************************ 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:44.484 * Looking for test storage... 00:20:44.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:44.484 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:20:44.744 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:44.744 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.744 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.744 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:44.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.745 --rc genhtml_branch_coverage=1 00:20:44.745 --rc genhtml_function_coverage=1 00:20:44.745 --rc genhtml_legend=1 00:20:44.745 --rc geninfo_all_blocks=1 00:20:44.745 --rc geninfo_unexecuted_blocks=1 00:20:44.745 00:20:44.745 ' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:44.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.745 --rc genhtml_branch_coverage=1 00:20:44.745 --rc genhtml_function_coverage=1 00:20:44.745 --rc genhtml_legend=1 00:20:44.745 --rc geninfo_all_blocks=1 00:20:44.745 --rc geninfo_unexecuted_blocks=1 00:20:44.745 00:20:44.745 ' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:44.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.745 --rc genhtml_branch_coverage=1 00:20:44.745 --rc genhtml_function_coverage=1 00:20:44.745 --rc genhtml_legend=1 00:20:44.745 --rc geninfo_all_blocks=1 00:20:44.745 --rc geninfo_unexecuted_blocks=1 00:20:44.745 00:20:44.745 ' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:44.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.745 --rc genhtml_branch_coverage=1 00:20:44.745 --rc genhtml_function_coverage=1 00:20:44.745 --rc genhtml_legend=1 00:20:44.745 --rc geninfo_all_blocks=1 00:20:44.745 --rc geninfo_unexecuted_blocks=1 00:20:44.745 00:20:44.745 ' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
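The cmp_versions trace above is the generic dotted-version check from scripts/common.sh: split both version strings on ".", then compare field by field, treating any missing field as 0. A minimal standalone sketch of the same technique (hypothetical helper name, not the exact scripts/common.sh source):

    #!/usr/bin/env bash
    # Sketch: compare two dotted version strings field by field.
    # Prints lt/gt/eq; "cmp_versions_sketch" is a hypothetical name.
    cmp_versions_sketch() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields count as 0, so 1.15 compares like 1.15.0
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && { echo lt; return 0; }
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && { echo gt; return 0; }
        done
        echo eq
    }
    cmp_versions_sketch 1.15 2    # prints "lt", matching the "lt 1.15 2" trace
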
00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:44.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # 
local -g is_hw=no 00:20:44.745 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:44.746 07:18:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 
00:20:44.746 10.0.0.1 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:44.746 10.0.0.2 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:44.746 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 
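At this point nvmf/setup.sh is building the virtual test network: the nvmf_ns_spdk namespace, the nvmf_br bridge, and per-pair initiator/target veth devices, with each integer from the 0x0a000001 IP pool rendered as a dotted quad by val_to_ip (167772161 is 0x0A000001, i.e. 10.0.0.1). The pair-0 topology the trace walks through, condensed into a standalone sketch (requires root; loop unrolled, device names and addresses as in the log, not the exact setup.sh source):

    # Sketch: one initiator/target veth pair bridged as in nvmf_veth_init.
    ip netns add nvmf_ns_spdk                        # target-side namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk           # move target end into the ns
    ip addr add 10.0.0.1/24 dev initiator0           # 167772161 -> 10.0.0.1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1    # ns can reach the initiator
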
00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:44.747 07:18:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 
ip=167772163 in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:44.747 10.0.0.3 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:44.747 10.0.0.4 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:44.747 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:44.748 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:45.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:20:45.006 00:20:45.006 --- 10.0.0.1 ping statistics --- 00:20:45.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.006 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:45.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:45.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:20:45.006 00:20:45.006 --- 10.0.0.2 ping statistics --- 00:20:45.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.006 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:45.006 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:45.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:45.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:45.006 00:20:45.007 --- 10.0.0.3 ping statistics --- 00:20:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.007 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:45.007 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:45.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:45.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:20:45.007 00:20:45.007 --- 10.0.0.4 ping statistics --- 00:20:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.007 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:45.007 07:18:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2
00:20:45.007 '
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=64250
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 64250
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 64250 ']'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:45.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.007 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:20:45.007 [2024-11-20 07:18:09.109623] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:20:45.008 [2024-11-20 07:18:09.109689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:45.276 [2024-11-20 07:18:09.252007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:45.276 [2024-11-20 07:18:09.286197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:45.276 [2024-11-20 07:18:09.286249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:45.276 [2024-11-20 07:18:09.286256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:45.276 [2024-11-20 07:18:09.286261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:45.276 [2024-11-20 07:18:09.286265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
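The setup trace above recovers every address the same way: get_net_dev canonicalizes the device name, then the IP that setup.sh recorded in the interface's ifalias is read back, with target-side reads executed inside the nvmf_ns_spdk namespace. A minimal standalone sketch of that lookup pattern in the same shell style (the function name and argument handling below are illustrative, not setup.sh's exact helpers):

    # Read the IP recorded in a device's ifalias, optionally inside a netns.
    get_dev_ip() {
        local dev=$1 netns=${2:-} ip
        if [[ -n $netns ]]; then
            ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")
        fi
        [[ -n $ip ]] && echo "$ip"
    }
    # Mirroring the trace: get_dev_ip initiator0 -> 10.0.0.1,
    # get_dev_ip target1 nvmf_ns_spdk -> 10.0.0.4.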
00:20:45.276 [2024-11-20 07:18:09.286525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:45.276 [2024-11-20 07:18:09.316802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 [2024-11-20 07:18:09.967730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 [2024-11-20 07:18:09.983806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 malloc0
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:20:45.841 {
00:20:45.841 "params": {
00:20:45.841 "name": "Nvme$subsystem",
00:20:45.841 "trtype": "$TEST_TRANSPORT",
00:20:45.841 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:45.841 "adrfam": "ipv4",
00:20:45.841 "trsvcid": "$NVMF_PORT",
00:20:45.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:45.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:45.841 "hdgst": ${hdgst:-false},
00:20:45.841 "ddgst": ${ddgst:-false}
00:20:45.841 },
00:20:45.841 "method": "bdev_nvme_attach_controller"
00:20:45.841 }
00:20:45.841 EOF
00:20:45.841 )")
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq .
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:20:45.841 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:20:45.841 "params": {
00:20:45.841 "name": "Nvme1",
00:20:45.841 "trtype": "tcp",
00:20:45.841 "traddr": "10.0.0.2",
00:20:45.841 "adrfam": "ipv4",
00:20:45.841 "trsvcid": "4420",
00:20:45.841 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:45.841 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:45.841 "hdgst": false,
00:20:45.841 "ddgst": false
00:20:45.841 },
00:20:45.841 "method": "bdev_nvme_attach_controller"
00:20:45.841 }'
00:20:46.099 [2024-11-20 07:18:10.051881] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:20:46.099 [2024-11-20 07:18:10.051945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64283 ]
00:20:46.099 [2024-11-20 07:18:10.193132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.099 [2024-11-20 07:18:10.227992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:46.099 [2024-11-20 07:18:10.265960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:46.356 Running I/O for 10 seconds...
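The gen_nvmf_target_json trace above shows the bdevperf configuration being assembled: a heredoc emits one bdev_nvme_attach_controller fragment per requested subsystem, the fragments are comma-joined under IFS=',', and jq normalizes the result; the /dev/fd/62 path that bdevperf reads via --json is the read end of a bash process substitution, i.e. effectively bdevperf --json <(gen_nvmf_target_json). A small runnable sketch of the join-and-normalize step, with one hard-coded fragment standing in for the templated values (the real helper fills them from the test environment and embeds them in its full config):

    # One JSON fragment per subsystem; a single literal one for illustration.
    config=('{"params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false},
              "method": "bdev_nvme_attach_controller"}')
    # Comma-join in a subshell so IFS does not leak; the [%s] wrapper only
    # keeps this sketch's output valid JSON for any number of fragments.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .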
00:20:48.222 6801.00 IOPS, 53.13 MiB/s [2024-11-20T07:18:13.796Z] 7219.50 IOPS, 56.40 MiB/s [2024-11-20T07:18:14.728Z] 7762.33 IOPS, 60.64 MiB/s [2024-11-20T07:18:15.661Z] 8006.00 IOPS, 62.55 MiB/s [2024-11-20T07:18:16.597Z] 8147.80 IOPS, 63.65 MiB/s [2024-11-20T07:18:17.530Z] 8253.17 IOPS, 64.48 MiB/s [2024-11-20T07:18:18.463Z] 8334.71 IOPS, 65.11 MiB/s [2024-11-20T07:18:19.420Z] 8403.38 IOPS, 65.65 MiB/s [2024-11-20T07:18:20.794Z] 8449.11 IOPS, 66.01 MiB/s [2024-11-20T07:18:20.794Z] 8476.20 IOPS, 66.22 MiB/s 00:20:56.591 Latency(us) 00:20:56.591 [2024-11-20T07:18:20.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.591 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:56.591 Verification LBA range: start 0x0 length 0x1000 00:20:56.591 Nvme1n1 : 10.01 8479.21 66.24 0.00 0.00 15053.00 368.64 27222.65 00:20:56.591 [2024-11-20T07:18:20.794Z] =================================================================================================================== 00:20:56.591 [2024-11-20T07:18:20.794Z] Total : 8479.21 66.24 0.00 0.00 15053.00 368.64 27222.65 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=64400 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:56.591 { 00:20:56.591 "params": { 00:20:56.591 "name": "Nvme$subsystem", 00:20:56.591 "trtype": "$TEST_TRANSPORT", 00:20:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.591 "adrfam": "ipv4", 00:20:56.591 "trsvcid": "$NVMF_PORT", 00:20:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.591 "hdgst": ${hdgst:-false}, 00:20:56.591 "ddgst": ${ddgst:-false} 00:20:56.591 }, 00:20:56.591 "method": "bdev_nvme_attach_controller" 00:20:56.591 } 00:20:56.591 EOF 00:20:56.591 )") 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:20:56.591 [2024-11-20 07:18:20.490406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.591 [2024-11-20 07:18:20.490439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
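A quick consistency check on the Latency(us) table above: bdevperf's MiB/s column is IOPS × IO size / 2^20, and with the 8192-byte I/Os used in this run the reported figures line up (plain bc, independent of the test scripts):

    $ echo 'scale=2; 8479.21 * 8192 / 1048576' | bc   # the 10.01 s average row
    66.24
    $ echo 'scale=2; 8476.20 * 8192 / 1048576' | bc   # the final per-second sample
    66.22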
00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:20:56.591 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:56.591 "params": { 00:20:56.591 "name": "Nvme1", 00:20:56.591 "trtype": "tcp", 00:20:56.591 "traddr": "10.0.0.2", 00:20:56.591 "adrfam": "ipv4", 00:20:56.591 "trsvcid": "4420", 00:20:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.591 "hdgst": false, 00:20:56.591 "ddgst": false 00:20:56.592 }, 00:20:56.592 "method": "bdev_nvme_attach_controller" 00:20:56.592 }' 00:20:56.592 [2024-11-20 07:18:20.498378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.498394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.506376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.506395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.514361] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:56.592 [2024-11-20 07:18:20.514378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.514393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.514408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64400 ] 00:20:56.592 [2024-11-20 07:18:20.522380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.522396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.530384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.530401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.538386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.538403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.546385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.546402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.554389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.554405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.562392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.562413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.570391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.570408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.578396] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.578416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.586396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.586415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.594398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.594416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.602401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.602421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.610403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.610425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.618403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.618421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.626405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.626424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.634407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.634425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.642408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.642425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.646599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.592 [2024-11-20 07:18:20.650408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.650425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.658412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.658432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.666415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.666433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.674414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.674433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.678111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.592 [2024-11-20 07:18:20.682414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.682431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:20:56.592 [2024-11-20 07:18:20.690426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.690448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.698426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.698445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.706425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.706441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.714429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.714448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.715542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.592 [2024-11-20 07:18:20.722431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.722450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.730431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.730450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.738433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.738450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.746445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.746469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.754448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.754468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.762456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.762478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.770461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.770482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.778461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.778482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.592 [2024-11-20 07:18:20.786466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.592 [2024-11-20 07:18:20.786486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.794466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.794485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:20:56.851 [2024-11-20 07:18:20.802552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.802576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.810505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.810527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 Running I/O for 5 seconds... 00:20:56.851 [2024-11-20 07:18:20.818507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.818524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.831142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.831167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.841939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.841962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.850771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.850794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.859230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.859254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.868247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.868271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.877189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.877211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.886207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.886237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.895207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.895238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.904110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.904132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.913122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.913147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.921452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.921475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.930403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.930426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.939587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.939612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.946189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.946212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.957034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.957057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.965562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.965583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.974366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.851 [2024-11-20 07:18:20.974388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.851 [2024-11-20 07:18:20.983228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:20.983250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:20.991639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:20.991663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.000057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.000079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.008440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.008466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.017597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.017622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.026336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.026358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.035313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.035335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.852 [2024-11-20 07:18:21.044143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:56.852 [2024-11-20 07:18:21.044165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:57.109 [2024-11-20 07:18:21.052909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:57.109 [2024-11-20 07:18:21.052931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:57.109 
00:20:57.109 [2024-11-20 07:18:21.061227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:57.109 [2024-11-20 07:18:21.061252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c/nvmf_rpc.c error pair repeats for every add-namespace attempt from 07:18:21.069 through 07:18:21.808 ...]
00:20:57.627 16937.00 IOPS, 132.32 MiB/s [2024-11-20T07:18:21.830Z]
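The two *ERROR* lines above are the target's normal rejection path when an nvmf_subsystem_add_ns RPC requests an NSID the subsystem already exposes; the interleaved IOPS figures appear to come from an I/O job running alongside the RPC loop. To see which NSIDs a running target currently has attached, the nvmf_get_subsystems RPC can be queried; a minimal sketch, assuming the default RPC socket path (the -s value here is illustrative, not taken from this run):

    # Print every subsystem with its namespaces; each namespace entry
    # lists the nsid and the backing bdev currently in use.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems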
[... the error pair repeats for every attempt from 07:18:21.817 through 07:18:22.815 ...]
00:20:58.690 16957.50 IOPS, 132.48 MiB/s [2024-11-20T07:18:22.893Z]
[... the error pair repeats for every attempt from 07:18:22.823 through 07:18:23.730 ...]
00:20:59.723 [2024-11-20 07:18:23.739482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:59.723 [2024-11-20 07:18:23.739503]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.748414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.748435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.757286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.757380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.766959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.766980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.775332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.775353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.784448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.784543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.793501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.793522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.801834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.801854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.810819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.810839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 16981.33 IOPS, 132.67 MiB/s [2024-11-20T07:18:23.926Z] [2024-11-20 07:18:23.819814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.819903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.828777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.828801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.837785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.837807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.846791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.846880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.855174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.855195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.863981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.864002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 
07:18:23.873041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.873062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.882040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.882166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.891100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.891121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.900184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.900205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.909007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.909100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.723 [2024-11-20 07:18:23.915637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.723 [2024-11-20 07:18:23.915715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.926330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.926422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.935307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.935327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.944340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.944360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.952660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.952680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.961736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.961757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.970811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.970832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.979174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.979195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.987538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.987558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:23.996485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:23.996595] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.005605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.005626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.014476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.014496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.023477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.023579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.032743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.032764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.041691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.041711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.050690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.050776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.059898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.059920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.069061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.069081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.078228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.078248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.087227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.087247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.981 [2024-11-20 07:18:24.095606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.981 [2024-11-20 07:18:24.095626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.104575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.104595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.113507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.113594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.122576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.122598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.130955] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.130976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.139355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.139374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.148785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.148804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.157848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.157867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.166928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.166947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.982 [2024-11-20 07:18:24.173556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.982 [2024-11-20 07:18:24.173578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.184535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.184560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.193236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.193254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.201658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.201676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.210565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.210583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.219587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.219606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.228287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.228303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.237207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.237233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.246105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.246123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.254276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.254294] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.263156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.263175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.272150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.272172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.240 [2024-11-20 07:18:24.281051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.240 [2024-11-20 07:18:24.281069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.290009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.290028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.298354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.298373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.306713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.306731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.315784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.315802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.324716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.324736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.333743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.333761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.342140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.342159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.351048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.351068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.359888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.359907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.368243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.368262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.377047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.377066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.385480] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.385498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.394465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.394483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.404010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.404028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.412343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.412361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.421434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.421453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.430479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.430498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.241 [2024-11-20 07:18:24.438838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.241 [2024-11-20 07:18:24.438857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.447757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.447775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.456127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.456146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.464938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.464956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.473955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.473975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.482751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.482769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.491649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.491667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.500661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.500678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.509595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.509613] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.518618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.518638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.527691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.527710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.536838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.536855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.545228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.545245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.554332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.554351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.562719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.562738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.571186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.571205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.580137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.500 [2024-11-20 07:18:24.580155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.500 [2024-11-20 07:18:24.589175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.589193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.598209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.598233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.607240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.607258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.616095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.616117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.624900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.624918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.633358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.633374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.642903] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.642921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.651693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.651711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.660630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.660648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.669672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.669692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.678655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.678673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.687685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.687703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.501 [2024-11-20 07:18:24.696130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.501 [2024-11-20 07:18:24.696148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.705055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.705073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.714047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.714066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.722520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.722538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.731456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.731475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.739819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.739838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.748849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.748868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.757246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.757263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.766398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.766420] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.775208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.775239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.789606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.789625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.798580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.798599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.807275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.807292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.816239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.816259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 17026.25 IOPS, 133.02 MiB/s [2024-11-20T07:18:24.963Z] [2024-11-20 07:18:24.824894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.824912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.833265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.833283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.841657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.841675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.850474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.850492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.859471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.859503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.868498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.868517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.877336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.877353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.886272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.886288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.895333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.895352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 
07:18:24.903646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.903666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.910159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.910178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.921522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.921541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.930231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.930248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.939704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.939722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.948752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.948769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.760 [2024-11-20 07:18:24.957506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.760 [2024-11-20 07:18:24.957524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:24.966559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:24.966578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:24.974961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:24.974979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:24.984045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:24.984063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:24.993145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:24.993162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.002025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.002044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.010418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.010436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.019319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.019338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.028346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.028364] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.037307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.037325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.045682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.045700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.054767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.054786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.063116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.063137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.071569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.071587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.080488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.080506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.089551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.089568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.098083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.098102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.107067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.107086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.115543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.115563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.124653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.124671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.133520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.133538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.141894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.141912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.150245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.150263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.158606] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.019 [2024-11-20 07:18:25.158624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.019 [2024-11-20 07:18:25.166973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.166992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.020 [2024-11-20 07:18:25.176042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.176061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.020 [2024-11-20 07:18:25.184381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.184399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.020 [2024-11-20 07:18:25.193316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.193333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.020 [2024-11-20 07:18:25.199853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.199871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.020 [2024-11-20 07:18:25.210883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.020 [2024-11-20 07:18:25.210903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.219747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.219766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.228115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.228134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.236912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.236930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.245769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.245788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.254437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.254455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.263283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.263302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.271888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.271906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.280847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.280865] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.289949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.289968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.298886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.298904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.307824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.307843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.316739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.316757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.325904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.325924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.334703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.334720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.343742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.343763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.352721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.352738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.361776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.361798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.370908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.370927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.379755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.379773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.388087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.388105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.397178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.397197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.406116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.406135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.310 [2024-11-20 07:18:25.415140] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.310 [2024-11-20 07:18:25.415159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats roughly every 9 ms from 07:18:25.423 through 07:18:25.790 ...]
00:21:01.610 [2024-11-20 07:18:25.798747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.610 [2024-11-20 07:18:25.798765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
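The error pair above comes from the zcopy test repeatedly re-adding NSID 1 while the namespace is paused. For reference, a minimal sketch (assuming a running SPDK target, the stock scripts/rpc.py helper, and a malloc bdev; this is a hypothetical reproduction, not the test's literal loop) that provokes the same rejection:

# hypothetical reproduction, not taken verbatim from zcopy.sh
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b malloc0 64 512                            # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: NSID 1 already in use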
00:21:01.610 [2024-11-20 07:18:25.807761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:01.610 [2024-11-20 07:18:25.807780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:01.868 [2024-11-20 07:18:25.816688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:01.868 [2024-11-20 07:18:25.816707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:01.868 17047.60 IOPS, 133.18 MiB/s [2024-11-20T07:18:26.071Z]
00:21:01.868 [2024-11-20 07:18:25.822837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:01.869 [2024-11-20 07:18:25.822854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:01.869 Latency(us)
00:21:01.869 [2024-11-20T07:18:26.072Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min      max
00:21:01.869 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:21:01.869 Nvme1n1            :       5.01 17051.16  133.21    0.00  0.00  7500.87  2823.09  17845.96
00:21:01.869 [2024-11-20T07:18:26.072Z] ===================================================================================================================
00:21:01.869 [2024-11-20T07:18:26.072Z] Total              :            17051.16  133.21    0.00  0.00  7500.87  2823.09  17845.96
00:21:01.869 [2024-11-20 07:18:25.830840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:01.869 [2024-11-20 07:18:25.830857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at 07:18:25.838, .846, .854, .862, .870, .878 and .886 as the retry loop winds down ...]
00:21:01.869 [2024-11-20 07:18:25.894848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.869 [2024-11-20
07:18:25.894861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.869 [2024-11-20 07:18:25.902852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.869 [2024-11-20 07:18:25.902865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.869 [2024-11-20 07:18:25.910852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.869 [2024-11-20 07:18:25.910867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.869 [2024-11-20 07:18:25.918855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:01.869 [2024-11-20 07:18:25.918869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:01.869 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (64400) - No such process 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 64400 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:01.869 delay0 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.869 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:02.128 [2024-11-20 07:18:26.101178] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:08.683 Initializing NVMe Controllers 00:21:08.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:08.683 Initialization complete. Launching workers. 
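Before the abort statistics that follow, a condensed sketch of what this stage of zcopy.sh is driving (paths, NQN, and flags exactly as they appear in this run; rpc_cmd in the trace is autotest's wrapper, shown here as a plain scripts/rpc.py invocation, and the target from earlier in the log is assumed to be up):

# wrap malloc0 in a delay bdev so queued I/O lingers long enough to be aborted
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive 50/50 random read/write at queue depth 64 for 5 s while aborting outstanding commands;
# the success/failure totals are printed when the tool exits
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'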
00:21:08.683 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 33794 00:21:08.683 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33925, failed to submit 109 00:21:08.683 success 33883, unsuccessful 42, failed 0 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:08.941 rmmod nvme_tcp 00:21:08.941 rmmod nvme_fabrics 00:21:08.941 rmmod nvme_keyring 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 64250 ']' 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 64250 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 64250 ']' 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 64250 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64250 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64250' 00:21:08.941 killing process with pid 64250 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 64250 00:21:08.941 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 64250 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:08.942 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:09.200 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:21:09.201 07:18:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:21:09.201 00:21:09.201 real 0m24.669s 00:21:09.201 user 0m41.988s 00:21:09.201 sys 0m5.328s 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:09.201 ************************************ 00:21:09.201 END TEST nvmf_zcopy 00:21:09.201 ************************************ 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:09.201 ************************************ 00:21:09.201 START TEST nvmf_nmic 00:21:09.201 ************************************ 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:09.201 * Looking for test storage... 00:21:09.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.201 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@345 -- # : 1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.491 --rc genhtml_branch_coverage=1 00:21:09.491 --rc genhtml_function_coverage=1 00:21:09.491 --rc genhtml_legend=1 00:21:09.491 --rc geninfo_all_blocks=1 00:21:09.491 --rc geninfo_unexecuted_blocks=1 00:21:09.491 00:21:09.491 ' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.491 --rc genhtml_branch_coverage=1 00:21:09.491 --rc genhtml_function_coverage=1 00:21:09.491 --rc genhtml_legend=1 00:21:09.491 --rc geninfo_all_blocks=1 00:21:09.491 --rc geninfo_unexecuted_blocks=1 00:21:09.491 00:21:09.491 ' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.491 --rc genhtml_branch_coverage=1 00:21:09.491 --rc genhtml_function_coverage=1 00:21:09.491 --rc genhtml_legend=1 00:21:09.491 --rc geninfo_all_blocks=1 00:21:09.491 --rc geninfo_unexecuted_blocks=1 00:21:09.491 00:21:09.491 ' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.491 --rc genhtml_branch_coverage=1 00:21:09.491 --rc genhtml_function_coverage=1 00:21:09.491 --rc genhtml_legend=1 00:21:09.491 --rc geninfo_all_blocks=1 00:21:09.491 --rc geninfo_unexecuted_blocks=1 00:21:09.491 00:21:09.491 ' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.491 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:09.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # 
'[' 0 -eq 1 ']' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@105 -- # delete_main_bridge 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:09.492 07:18:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.492 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:09.493 10.0.0.1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:09.493 10.0.0.2 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target0 up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local 
initiator=initiator1 target=target1 _ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:09.493 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:09.494 10.0.0.3 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:09.494 10.0.0.4 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/setup.sh@66 -- # set_up initiator1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # 
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:09.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:21:09.494 00:21:09.494 --- 10.0.0.1 ping statistics --- 00:21:09.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.494 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:09.494 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:09.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:09.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:21:09.768 00:21:09.768 --- 10.0.0.2 ping statistics --- 00:21:09.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.768 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:09.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:09.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:09.768 00:21:09.768 --- 10.0.0.3 ping statistics --- 00:21:09.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.768 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:09.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:09.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:21:09.768 00:21:09.768 --- 10.0.0.4 ping statistics --- 00:21:09.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.768 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:09.768 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:21:09.769 ' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=64776 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 64776 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 64776 ']' 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:09.769 07:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:09.769 [2024-11-20 07:18:33.823121] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:09.769 [2024-11-20 07:18:33.823181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.769 [2024-11-20 07:18:33.952109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.028 [2024-11-20 07:18:33.984170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.028 [2024-11-20 07:18:33.984212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.028 [2024-11-20 07:18:33.984217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.028 [2024-11-20 07:18:33.984230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.028 [2024-11-20 07:18:33.984233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.028 [2024-11-20 07:18:33.984848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.028 [2024-11-20 07:18:33.985154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.028 [2024-11-20 07:18:33.985465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.028 [2024-11-20 07:18:33.985471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.028 [2024-11-20 07:18:34.013333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 [2024-11-20 07:18:34.687729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
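With the TCP transport initialized, the nmic test provisions the rest of the target over JSON-RPC, as traced below. Driven by hand against the same nvmf_tgt instance, the sequence would look roughly like this — a sketch that reuses the rpc.py path, bdev size, NQN, and listener address from this run, and assumes the daemon is reachable on its default /var/tmp/spdk.sock socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
$rpc bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1: -a allows any host, -s sets the serial number
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
# attach the bdev as a namespace and listen on the in-namespace target address
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The trace that follows then deliberately repeats nvmf_subsystem_add_ns with the same Malloc0 against a second subsystem; the resulting "already claimed" JSON-RPC error is exactly what "test case1" asserts.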
00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 Malloc0 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 [2024-11-20 07:18:34.742673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.598 test case1: single bdev can't be used in multiple subsystems 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:10.598 07:18:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 [2024-11-20 07:18:34.766589] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:10.598 [2024-11-20 07:18:34.766610] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:10.598 [2024-11-20 07:18:34.766615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.598 request: 00:21:10.598 { 00:21:10.598 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.598 "namespace": { 00:21:10.598 "bdev_name": "Malloc0", 00:21:10.598 "no_auto_visible": false 00:21:10.598 }, 00:21:10.598 "method": "nvmf_subsystem_add_ns", 00:21:10.598 "req_id": 1 00:21:10.598 } 00:21:10.598 Got JSON-RPC error response 00:21:10.598 response: 00:21:10.598 { 00:21:10.598 "code": -32602, 00:21:10.598 "message": "Invalid parameters" 00:21:10.598 } 00:21:10.598 Adding namespace failed - expected result. 00:21:10.598 test case2: host connect to nvmf target in multiple paths 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:10.598 [2024-11-20 07:18:34.778667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.598 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:10.857 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:10.857 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:10.857 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:21:10.857 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.857 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:21:10.857 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1209 -- # sleep 2 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:21:13.382 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:13.382 [global] 00:21:13.382 thread=1 00:21:13.382 invalidate=1 00:21:13.382 rw=write 00:21:13.382 time_based=1 00:21:13.382 runtime=1 00:21:13.382 ioengine=libaio 00:21:13.382 direct=1 00:21:13.382 bs=4096 00:21:13.382 iodepth=1 00:21:13.382 norandommap=0 00:21:13.382 numjobs=1 00:21:13.382 00:21:13.382 verify_dump=1 00:21:13.382 verify_backlog=512 00:21:13.382 verify_state_save=0 00:21:13.382 do_verify=1 00:21:13.382 verify=crc32c-intel 00:21:13.382 [job0] 00:21:13.382 filename=/dev/nvme0n1 00:21:13.382 Could not set queue depth (nvme0n1) 00:21:13.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:13.382 fio-3.35 00:21:13.382 Starting 1 thread 00:21:14.317 00:21:14.317 job0: (groupid=0, jobs=1): err= 0: pid=64866: Wed Nov 20 07:18:38 2024 00:21:14.317 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:21:14.318 slat (nsec): min=5067, max=66277, avg=6869.57, stdev=3875.65 00:21:14.318 clat (usec): min=82, max=512, avg=123.08, stdev=20.82 00:21:14.318 lat (usec): min=87, max=528, avg=129.95, stdev=21.90 00:21:14.318 clat percentiles (usec): 00:21:14.318 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 101], 20.00th=[ 106], 00:21:14.318 | 30.00th=[ 112], 40.00th=[ 118], 50.00th=[ 123], 60.00th=[ 128], 00:21:14.318 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:21:14.318 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 338], 99.95th=[ 343], 00:21:14.318 | 99.99th=[ 515] 00:21:14.318 write: IOPS=4608, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:14.318 slat (nsec): min=7295, max=98964, avg=10590.87, stdev=4570.76 00:21:14.318 clat (usec): min=49, max=301, avg=74.49, stdev=13.23 00:21:14.318 lat (usec): min=61, max=311, avg=85.09, stdev=14.66 00:21:14.318 clat percentiles (usec): 00:21:14.318 | 1.00th=[ 54], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:21:14.318 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 76], 60.00th=[ 79], 00:21:14.318 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 92], 00:21:14.318 | 99.00th=[ 102], 99.50th=[ 111], 99.90th=[ 198], 99.95th=[ 258], 00:21:14.318 | 99.99th=[ 302] 00:21:14.318 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:21:14.318 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:21:14.318 lat (usec) : 50=0.01%, 100=54.15%, 250=45.68%, 500=0.15%, 750=0.01% 00:21:14.318 cpu : usr=1.10%, sys=7.30%, ctx=9222, majf=0, minf=5 00:21:14.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=4608,4613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:14.318 00:21:14.318 Run status group 0 (all jobs): 00:21:14.318 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:21:14.318 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:21:14.318 00:21:14.318 Disk stats (read/write): 00:21:14.318 nvme0n1: ios=4146/4194, merge=0/0, ticks=520/336, in_queue=856, util=91.08% 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:14.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:14.318 rmmod nvme_tcp 00:21:14.318 rmmod nvme_fabrics 00:21:14.318 rmmod nvme_keyring 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 64776 ']' 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 64776 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 64776 ']' 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 64776 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:21:14.318 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64776 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.576 killing process with pid 64776 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64776' 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 64776 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 64776 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:14.576 07:18:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:14.576 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:14.577 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:21:14.836 00:21:14.836 real 0m5.514s 00:21:14.836 user 0m18.001s 00:21:14.836 sys 0m1.601s 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:14.836 ************************************ 00:21:14.836 END TEST nvmf_nmic 00:21:14.836 ************************************ 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:14.836 ************************************ 00:21:14.836 START TEST nvmf_fio_target 00:21:14.836 ************************************ 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:14.836 * Looking for test storage... 
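run_test wraps each suite in the START/END banners and the real/user/sys timing shown above, passing everything after the test name straight to the script. To rerun just this suite outside the harness, something like the following should work — a sketch that assumes the same checkout path as this runner and root privileges, since the veth and network-namespace setup traced below requires them:

cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/target/fio.sh --transport=tcp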
00:21:14.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:21:14.836 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.837 --rc genhtml_branch_coverage=1 00:21:14.837 --rc genhtml_function_coverage=1 00:21:14.837 --rc genhtml_legend=1 00:21:14.837 --rc geninfo_all_blocks=1 00:21:14.837 --rc geninfo_unexecuted_blocks=1 00:21:14.837 00:21:14.837 ' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.837 --rc genhtml_branch_coverage=1 00:21:14.837 --rc genhtml_function_coverage=1 00:21:14.837 --rc genhtml_legend=1 00:21:14.837 --rc geninfo_all_blocks=1 00:21:14.837 --rc geninfo_unexecuted_blocks=1 00:21:14.837 00:21:14.837 ' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.837 --rc genhtml_branch_coverage=1 00:21:14.837 --rc genhtml_function_coverage=1 00:21:14.837 --rc genhtml_legend=1 00:21:14.837 --rc geninfo_all_blocks=1 00:21:14.837 --rc geninfo_unexecuted_blocks=1 00:21:14.837 00:21:14.837 ' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.837 --rc genhtml_branch_coverage=1 00:21:14.837 --rc genhtml_function_coverage=1 00:21:14.837 --rc genhtml_legend=1 00:21:14.837 --rc geninfo_all_blocks=1 00:21:14.837 --rc geninfo_unexecuted_blocks=1 00:21:14.837 00:21:14.837 ' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:14.837 
07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:14.837 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.837 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:14.838 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:14.838 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.838 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.838 07:18:38 
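The "[: : integer expression expected" complaint logged above is bash refusing to compare an empty string with -eq in the '[' '' -eq 1 ']' test from common.sh line 31. A minimal reproduction and two tolerant guards (the flag name here is illustrative, not SPDK's):

flag=""
[ "$flag" -eq 1 ] && echo enabled                     # -> [: : integer expression expected
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo enabled   # quiet when the flag is empty
(( ${flag:-0} == 1 )) && echo enabled                 # arithmetic test with a default

Either way the test evaluates false, which is why the run simply continues past the warning.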
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:14.838 07:18:39 
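Stripped of the xtrace noise, the create_target_ns steps above reduce to a small reusable pattern: create the namespace once, keep the "ip netns exec" prefix in an array, and let every later helper run target-side commands by expanding that array in front of them. A minimal sketch, assuming root and iproute2:

ns=nvmf_ns_spdk
ip netns add "$ns"
# keep the "run inside the namespace" prefix as an array so word
# splitting stays correct when it is expanded later
NVMF_TARGET_NS_CMD=(ip netns exec "$ns")
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # bring up target-side loopback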
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:14.838 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:21:15.098 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- 
# local dev=target0 ns=nvmf_ns_spdk 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:15.099 10.0.0.1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:15.099 10.0.0.2 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # 
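The set_ip steps above take a 32-bit value from the address pool (167772161 is 0x0A000001, i.e. 10.0.0.1), print it as a dotted quad, assign it with a /24 prefix, and mirror it into the interface's ifalias so later lookups can read it back from sysfs. A sketch of that conversion, not the exact SPDK helper:

val_to_ip() {
  local val=$1
  # unpack the four bytes of the 32-bit value, most significant first
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
}

ip_str=$(val_to_ip 167772161)                            # -> 10.0.0.1
ip addr add "$ip_str/24" dev initiator0
echo "$ip_str" | tee /sys/class/net/initiator0/ifalias   # record for readback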
local dev=initiator0 in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:15.099 07:18:39 
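Condensed, the wiring performed for one initiator/target pair in the steps above is: two veth pairs, both *_br peer ends enslaved to the nvmf_br bridge, and the target end moved into the namespace, so initiator0 and target0 talk across the bridge like two hosts on a switch. Roughly (pair 0 shown):

ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk        # target side lives in the netns
ip link set initiator0_br master nvmf_br      # both peer ends join the bridge
ip link set target0_br    master nvmf_br
for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up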
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:15.099 07:18:39 
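The ipts wrapper expanded above tags every iptables rule it installs with a SPDK_NVMF: comment that embeds the rule's own arguments, which makes the rules easy to locate and remove at teardown. A sketch of the pattern; the one-pass cleanup shown is one possible approach, not necessarily what common.sh does:

ipts() {
  # append a recognizable tag so these rules can be found later
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
# teardown: keep everything except the tagged rules
iptables-save | grep -v SPDK_NVMF: | iptables-restore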
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.099 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:15.100 10.0.0.3 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:15.100 10.0.0.4 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:15.100 
07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:15.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:21:15.100 00:21:15.100 --- 10.0.0.1 ping statistics --- 00:21:15.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.100 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:21:15.100 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:15.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:21:15.101 00:21:15.101 --- 10.0.0.2 ping statistics --- 00:21:15.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.101 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.101 07:18:39 
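ping_ips validates each pair in both directions before any NVMe traffic flows: the initiator address is pinged from inside the namespace and the target address from the host, one packet each, so a mis-wired veth or bridge fails immediately. The per-pair check amounts to:

ping_pair() {
  local initiator_ip=$1 target_ip=$2
  ip netns exec nvmf_ns_spdk ping -c 1 "$initiator_ip"   # target -> initiator
  ping -c 1 "$target_ip"                                 # initiator -> target
}
ping_pair 10.0.0.1 10.0.0.2
ping_pair 10.0.0.3 10.0.0.4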
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:15.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:15.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:15.101 00:21:15.101 --- 10.0.0.3 ping statistics --- 00:21:15.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.101 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:15.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:15.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:21:15.101 00:21:15.101 --- 10.0.0.4 ping statistics --- 00:21:15.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.101 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev 
initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.101 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:15.102 07:18:39 
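The legacy variables being filled in here (NVMF_FIRST_INITIATOR_IP and friends) are just ifalias readbacks: get_ip_address cats /sys/class/net/<dev>/ifalias, optionally through the namespace prefix, which it picks up by nameref from the array whose name is passed as the second argument. A simplified sketch of that dispatch:

get_ip_address() {
  local dev=$1 in_ns=${2-}
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns                 # nameref to e.g. NVMF_TARGET_NS_CMD
    "${ns[@]}" cat "/sys/class/net/$dev/ifalias"
  else
    cat "/sys/class/net/$dev/ifalias"
  fi
}

NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)                # 10.0.0.1
NVMF_FIRST_TARGET_IP=$(get_ip_address target0 NVMF_TARGET_NS_CMD)   # 10.0.0.2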
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:15.102 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:15.360 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:15.360 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:15.360 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:21:15.360 ' 00:21:15.360 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.360 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=65100 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@329 -- # waitforlisten 65100 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65100 ']' 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.361 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.361 [2024-11-20 07:18:39.356705] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:15.361 [2024-11-20 07:18:39.356764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.361 [2024-11-20 07:18:39.488071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.361 [2024-11-20 07:18:39.518769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.361 [2024-11-20 07:18:39.518807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.361 [2024-11-20 07:18:39.518812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.361 [2024-11-20 07:18:39.518816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.361 [2024-11-20 07:18:39.518820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
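nvmfappstart -m 0xF boils down to launching nvmf_tgt through the namespace prefix and then blocking in waitforlisten until the app answers on its RPC socket. A simplified version of that handshake; SPDK_BIN and SPDK_ROOT stand in for the repo paths, and the real helper also bounds its retries:

"${NVMF_TARGET_NS_CMD[@]}" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept commands
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.2
done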
00:21:15.361 [2024-11-20 07:18:39.519438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.361 [2024-11-20 07:18:39.519530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.361 [2024-11-20 07:18:39.520015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.361 [2024-11-20 07:18:39.520019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.361 [2024-11-20 07:18:39.547869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.294 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.294 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:21:16.294 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:16.294 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.294 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.295 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.295 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.295 [2024-11-20 07:18:40.446941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.295 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:16.553 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:16.553 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:16.811 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:16.811 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:17.069 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:17.069 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:17.326 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:17.326 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:17.326 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:17.584 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:17.584 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:17.841 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:17.841 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
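Each bdev_malloc_create 64 512 RPC above returns the name of the bdev it created (Malloc0, Malloc1, ...); the script captures those names and later hands a pair of them to bdev_raid_create. Condensed, the transport-plus-raid0 portion is:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8 KiB IO unit
raid_malloc_bdevs="$($rpc_py bdev_malloc_create 64 512) "    # 64 MiB bdev, 512 B blocks
raid_malloc_bdevs+=$($rpc_py bdev_malloc_create 64 512)
# stripe the two malloc bdevs into a raid0 with a 64 KiB strip size
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$raid_malloc_bdevs"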
bdev_malloc_create 64 512 00:21:18.099 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:18.099 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:18.356 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:18.356 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:18.356 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.614 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:18.614 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.872 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.129 [2024-11-20 07:18:43.105198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.129 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:19.129 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:21:19.445 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:21:21.359 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:21:21.619 07:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:21.619 [global] 00:21:21.619 thread=1 00:21:21.619 invalidate=1 00:21:21.619 rw=write 00:21:21.619 time_based=1 00:21:21.619 runtime=1 00:21:21.619 ioengine=libaio 00:21:21.619 direct=1 00:21:21.619 bs=4096 00:21:21.619 iodepth=1 00:21:21.619 norandommap=0 00:21:21.619 numjobs=1 00:21:21.619 00:21:21.619 verify_dump=1 00:21:21.619 verify_backlog=512 00:21:21.619 verify_state_save=0 00:21:21.619 do_verify=1 00:21:21.619 verify=crc32c-intel 00:21:21.619 [job0] 00:21:21.619 filename=/dev/nvme0n1 00:21:21.619 [job1] 00:21:21.619 filename=/dev/nvme0n2 00:21:21.619 [job2] 00:21:21.619 filename=/dev/nvme0n3 00:21:21.619 [job3] 00:21:21.619 filename=/dev/nvme0n4 00:21:21.619 Could not set queue depth (nvme0n1) 00:21:21.619 Could not set queue depth (nvme0n2) 00:21:21.619 Could not set queue depth (nvme0n3) 00:21:21.619 Could not set queue depth (nvme0n4) 00:21:21.619 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:21.619 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:21.619 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:21.619 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:21.619 fio-3.35 00:21:21.619 Starting 4 threads 00:21:23.059 00:21:23.059 job0: (groupid=0, jobs=1): err= 0: pid=65275: Wed Nov 20 07:18:46 2024 00:21:23.059 read: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:21:23.059 slat (nsec): min=3965, max=23555, avg=5464.07, stdev=1219.27 00:21:23.059 clat (usec): min=100, max=355, avg=150.25, stdev=12.72 00:21:23.059 lat (usec): min=106, max=361, avg=155.72, stdev=12.77 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:21:23.059 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:21:23.059 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:21:23.059 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 210], 99.95th=[ 293], 00:21:23.059 | 99.99th=[ 355] 00:21:23.059 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:21:23.059 slat (usec): min=5, max=104, avg= 9.50, stdev= 3.65 00:21:23.059 clat (usec): min=62, max=348, avg=124.72, stdev=13.55 00:21:23.059 lat (usec): min=84, max=357, avg=134.23, stdev=14.37 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 116], 00:21:23.059 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:21:23.059 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:21:23.059 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 247], 99.95th=[ 310], 00:21:23.059 | 99.99th=[ 351] 00:21:23.059 bw ( KiB/s): min=15272, max=15272, per=21.95%, avg=15272.00, stdev= 0.00, samples=1 00:21:23.059 iops : min= 3818, max= 3818, avg=3818.00, stdev= 0.00, samples=1 00:21:23.059 lat (usec) : 100=0.15%, 250=99.77%, 500=0.09% 00:21:23.059 cpu : usr=1.70%, sys=4.30%, ctx=6886, majf=0, minf=17 00:21:23.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.059 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 issued rwts: total=3298,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.059 job1: (groupid=0, jobs=1): err= 0: pid=65276: Wed Nov 20 07:18:46 2024 00:21:23.059 read: IOPS=5119, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1001msec) 00:21:23.059 slat (nsec): min=5284, max=25332, avg=5869.56, stdev=982.72 00:21:23.059 clat (usec): min=73, max=1172, avg=96.69, stdev=18.74 00:21:23.059 lat (usec): min=79, max=1180, avg=102.56, stdev=18.78 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:21:23.059 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 98], 00:21:23.059 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 116], 00:21:23.059 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 225], 99.95th=[ 260], 00:21:23.059 | 99.99th=[ 1172] 00:21:23.059 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:21:23.059 slat (nsec): min=6666, max=87342, avg=10765.45, stdev=5232.57 00:21:23.059 clat (usec): min=46, max=368, avg=71.87, stdev=12.67 00:21:23.059 lat (usec): min=59, max=377, avg=82.63, stdev=14.63 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 64], 00:21:23.059 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:21:23.059 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 90], 00:21:23.059 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 237], 99.95th=[ 273], 00:21:23.059 | 99.99th=[ 367] 00:21:23.059 bw ( KiB/s): min=22048, max=22048, per=31.70%, avg=22048.00, stdev= 0.00, samples=1 00:21:23.059 iops : min= 5512, max= 5512, avg=5512.00, stdev= 0.00, samples=1 00:21:23.059 lat (usec) : 50=0.01%, 100=83.96%, 250=15.95%, 500=0.07% 00:21:23.059 lat (msec) : 2=0.01% 00:21:23.059 cpu : usr=1.80%, sys=7.50%, ctx=10757, majf=0, minf=7 00:21:23.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 issued rwts: total=5125,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.059 job2: (groupid=0, jobs=1): err= 0: pid=65277: Wed Nov 20 07:18:46 2024 00:21:23.059 read: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:21:23.059 slat (nsec): min=4021, max=18062, avg=5164.39, stdev=954.02 00:21:23.059 clat (usec): min=89, max=328, avg=150.55, stdev=13.00 00:21:23.059 lat (usec): min=97, max=333, avg=155.72, stdev=12.93 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:21:23.059 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:21:23.059 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:21:23.059 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 227], 99.95th=[ 306], 00:21:23.059 | 99.99th=[ 330] 00:21:23.059 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:21:23.059 slat (usec): min=5, max=144, avg= 8.98, stdev= 4.58 00:21:23.059 clat (usec): min=2, max=377, avg=125.24, stdev=13.89 00:21:23.059 lat (usec): min=99, max=385, avg=134.22, stdev=14.35 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 
114], 20.00th=[ 117], 00:21:23.059 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:21:23.059 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:21:23.059 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 277], 99.95th=[ 338], 00:21:23.059 | 99.99th=[ 379] 00:21:23.059 bw ( KiB/s): min=15272, max=15272, per=21.95%, avg=15272.00, stdev= 0.00, samples=1 00:21:23.059 iops : min= 3818, max= 3818, avg=3818.00, stdev= 0.00, samples=1 00:21:23.059 lat (usec) : 4=0.01%, 100=0.15%, 250=99.74%, 500=0.10% 00:21:23.059 cpu : usr=1.50%, sys=4.10%, ctx=6893, majf=0, minf=15 00:21:23.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.059 issued rwts: total=3300,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.059 job3: (groupid=0, jobs=1): err= 0: pid=65278: Wed Nov 20 07:18:46 2024 00:21:23.059 read: IOPS=4502, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1001msec) 00:21:23.059 slat (nsec): min=5241, max=59585, avg=6277.18, stdev=2102.14 00:21:23.059 clat (usec): min=76, max=710, avg=115.06, stdev=28.41 00:21:23.059 lat (usec): min=81, max=716, avg=121.34, stdev=28.85 00:21:23.059 clat percentiles (usec): 00:21:23.059 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:21:23.059 | 30.00th=[ 103], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 118], 00:21:23.059 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 141], 00:21:23.059 | 99.00th=[ 165], 99.50th=[ 351], 99.90th=[ 400], 99.95th=[ 412], 00:21:23.059 | 99.99th=[ 709] 00:21:23.059 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:23.059 slat (nsec): min=6851, max=70113, avg=10542.26, stdev=4283.56 00:21:23.059 clat (usec): min=56, max=967, avg=86.10, stdev=28.45 00:21:23.060 lat (usec): min=66, max=979, avg=96.65, stdev=28.94 00:21:23.060 clat percentiles (usec): 00:21:23.060 | 1.00th=[ 63], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 73], 00:21:23.060 | 30.00th=[ 76], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 87], 00:21:23.060 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 110], 00:21:23.060 | 99.00th=[ 153], 99.50th=[ 297], 99.90th=[ 388], 99.95th=[ 523], 00:21:23.060 | 99.99th=[ 971] 00:21:23.060 bw ( KiB/s): min=20480, max=20480, per=29.44%, avg=20480.00, stdev= 0.00, samples=1 00:21:23.060 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:21:23.060 lat (usec) : 100=56.19%, 250=43.06%, 500=0.70%, 750=0.03%, 1000=0.01% 00:21:23.060 cpu : usr=1.30%, sys=6.80%, ctx=9119, majf=0, minf=9 00:21:23.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.060 issued rwts: total=4507,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.060 00:21:23.060 Run status group 0 (all jobs): 00:21:23.060 READ: bw=63.3MiB/s (66.4MB/s), 12.9MiB/s-20.0MiB/s (13.5MB/s-21.0MB/s), io=63.4MiB (66.5MB), run=1001-1001msec 00:21:23.060 WRITE: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-22.0MiB/s (14.7MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1001-1001msec 00:21:23.060 00:21:23.060 Disk stats (read/write): 00:21:23.060 nvme0n1: ios=3026/3072, merge=0/0, 
ticks=439/378, in_queue=817, util=89.18% 00:21:23.060 nvme0n2: ios=4657/4822, merge=0/0, ticks=454/368, in_queue=822, util=89.63% 00:21:23.060 nvme0n3: ios=3003/3072, merge=0/0, ticks=448/362, in_queue=810, util=90.04% 00:21:23.060 nvme0n4: ios=4030/4096, merge=0/0, ticks=488/358, in_queue=846, util=90.21% 00:21:23.060 07:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:23.060 [global] 00:21:23.060 thread=1 00:21:23.060 invalidate=1 00:21:23.060 rw=randwrite 00:21:23.060 time_based=1 00:21:23.060 runtime=1 00:21:23.060 ioengine=libaio 00:21:23.060 direct=1 00:21:23.060 bs=4096 00:21:23.060 iodepth=1 00:21:23.060 norandommap=0 00:21:23.060 numjobs=1 00:21:23.060 00:21:23.060 verify_dump=1 00:21:23.060 verify_backlog=512 00:21:23.060 verify_state_save=0 00:21:23.060 do_verify=1 00:21:23.060 verify=crc32c-intel 00:21:23.060 [job0] 00:21:23.060 filename=/dev/nvme0n1 00:21:23.060 [job1] 00:21:23.060 filename=/dev/nvme0n2 00:21:23.060 [job2] 00:21:23.060 filename=/dev/nvme0n3 00:21:23.060 [job3] 00:21:23.060 filename=/dev/nvme0n4 00:21:23.060 Could not set queue depth (nvme0n1) 00:21:23.060 Could not set queue depth (nvme0n2) 00:21:23.060 Could not set queue depth (nvme0n3) 00:21:23.060 Could not set queue depth (nvme0n4) 00:21:23.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:23.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:23.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:23.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:23.060 fio-3.35 00:21:23.060 Starting 4 threads 00:21:23.995 00:21:23.995 job0: (groupid=0, jobs=1): err= 0: pid=65331: Wed Nov 20 07:18:48 2024 00:21:23.995 read: IOPS=4468, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1001msec) 00:21:23.995 slat (nsec): min=5117, max=90200, avg=6255.81, stdev=3112.75 00:21:23.995 clat (usec): min=90, max=435, avg=114.78, stdev=15.78 00:21:23.995 lat (usec): min=95, max=440, avg=121.03, stdev=16.60 00:21:23.995 clat percentiles (usec): 00:21:23.995 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 105], 00:21:23.995 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 116], 00:21:23.995 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 137], 00:21:23.995 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 347], 99.95th=[ 392], 00:21:23.995 | 99.99th=[ 437] 00:21:23.995 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:23.995 slat (usec): min=6, max=157, avg=11.86, stdev= 7.63 00:21:23.995 clat (usec): min=25, max=394, avg=85.79, stdev=15.96 00:21:23.995 lat (usec): min=74, max=430, avg=97.65, stdev=19.34 00:21:23.995 clat percentiles (usec): 00:21:23.995 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:21:23.995 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 86], 00:21:23.995 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 110], 00:21:23.995 | 99.00th=[ 135], 99.50th=[ 151], 99.90th=[ 262], 99.95th=[ 314], 00:21:23.995 | 99.99th=[ 396] 00:21:23.995 bw ( KiB/s): min=18984, max=18984, per=31.57%, avg=18984.00, stdev= 0.00, samples=1 00:21:23.995 iops : min= 4746, max= 4746, avg=4746.00, stdev= 0.00, samples=1 00:21:23.995 lat (usec) : 50=0.02%, 100=47.86%, 250=51.98%, 500=0.14% 
00:21:23.995 cpu : usr=1.60%, sys=7.10%, ctx=9085, majf=0, minf=15 00:21:23.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.995 issued rwts: total=4473,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.995 job1: (groupid=0, jobs=1): err= 0: pid=65332: Wed Nov 20 07:18:48 2024 00:21:23.995 read: IOPS=2471, BW=9886KiB/s (10.1MB/s)(9896KiB/1001msec) 00:21:23.995 slat (nsec): min=5447, max=64415, avg=7362.09, stdev=3664.50 00:21:23.995 clat (usec): min=104, max=460, avg=227.97, stdev=37.73 00:21:23.995 lat (usec): min=110, max=466, avg=235.33, stdev=37.70 00:21:23.995 clat percentiles (usec): 00:21:23.995 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:21:23.995 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 223], 00:21:23.995 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:21:23.995 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 400], 00:21:23.995 | 99.99th=[ 461] 00:21:23.995 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:21:23.995 slat (usec): min=8, max=117, avg=10.96, stdev= 5.44 00:21:23.995 clat (usec): min=67, max=3414, avg=150.33, stdev=131.61 00:21:23.995 lat (usec): min=77, max=3423, avg=161.29, stdev=132.44 00:21:23.995 clat percentiles (usec): 00:21:23.995 | 1.00th=[ 81], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 100], 00:21:23.995 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:21:23.995 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 208], 00:21:23.995 | 99.00th=[ 265], 99.50th=[ 318], 99.90th=[ 2769], 99.95th=[ 2769], 00:21:23.995 | 99.99th=[ 3425] 00:21:23.995 bw ( KiB/s): min=12000, max=12000, per=19.95%, avg=12000.00, stdev= 0.00, samples=1 00:21:23.995 iops : min= 3000, max= 3000, avg=3000.00, stdev= 0.00, samples=1 00:21:23.995 lat (usec) : 100=10.27%, 250=73.44%, 500=16.13%, 750=0.02% 00:21:23.995 lat (msec) : 2=0.04%, 4=0.10% 00:21:23.995 cpu : usr=0.60%, sys=4.20%, ctx=5035, majf=0, minf=19 00:21:23.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.995 issued rwts: total=2474,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.995 job2: (groupid=0, jobs=1): err= 0: pid=65333: Wed Nov 20 07:18:48 2024 00:21:23.995 read: IOPS=4879, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec) 00:21:23.995 slat (nsec): min=5273, max=34381, avg=5915.04, stdev=937.78 00:21:23.995 clat (usec): min=60, max=1081, avg=103.36, stdev=16.67 00:21:23.995 lat (usec): min=89, max=1087, avg=109.28, stdev=16.68 00:21:23.995 clat percentiles (usec): 00:21:23.995 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:21:23.995 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:21:23.995 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 116], 95.00th=[ 121], 00:21:23.995 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 147], 99.95th=[ 147], 00:21:23.995 | 99.99th=[ 1090] 00:21:23.995 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:21:23.995 slat (nsec): min=7028, max=93697, avg=10089.73, 
stdev=3757.30 00:21:23.996 clat (usec): min=58, max=446, avg=79.40, stdev=13.46 00:21:23.996 lat (usec): min=67, max=475, avg=89.49, stdev=14.62 00:21:23.996 clat percentiles (usec): 00:21:23.996 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73], 00:21:23.996 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 80], 00:21:23.996 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 91], 95.00th=[ 97], 00:21:23.996 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 217], 99.95th=[ 383], 00:21:23.996 | 99.99th=[ 445] 00:21:23.996 bw ( KiB/s): min=20480, max=20480, per=34.06%, avg=20480.00, stdev= 0.00, samples=1 00:21:23.996 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:21:23.996 lat (usec) : 100=69.16%, 250=30.78%, 500=0.05% 00:21:23.996 lat (msec) : 2=0.01% 00:21:23.996 cpu : usr=1.30%, sys=7.00%, ctx=10006, majf=0, minf=7 00:21:23.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.996 issued rwts: total=4884,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.996 job3: (groupid=0, jobs=1): err= 0: pid=65334: Wed Nov 20 07:18:48 2024 00:21:23.996 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:21:23.996 slat (nsec): min=5285, max=27781, avg=5823.28, stdev=1247.42 00:21:23.996 clat (usec): min=104, max=487, avg=219.95, stdev=37.46 00:21:23.996 lat (usec): min=110, max=493, avg=225.77, stdev=37.53 00:21:23.996 clat percentiles (usec): 00:21:23.996 | 1.00th=[ 123], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 196], 00:21:23.996 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:21:23.996 | 70.00th=[ 235], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:21:23.996 | 99.00th=[ 314], 99.50th=[ 351], 99.90th=[ 469], 99.95th=[ 478], 00:21:23.996 | 99.99th=[ 486] 00:21:23.996 write: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:21:23.996 slat (nsec): min=8074, max=51237, avg=10041.04, stdev=3530.70 00:21:23.996 clat (usec): min=75, max=1353, avg=141.16, stdev=40.20 00:21:23.996 lat (usec): min=83, max=1362, avg=151.20, stdev=40.52 00:21:23.996 clat percentiles (usec): 00:21:23.996 | 1.00th=[ 82], 5.00th=[ 92], 10.00th=[ 97], 20.00th=[ 103], 00:21:23.996 | 30.00th=[ 117], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:21:23.996 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:21:23.996 | 99.00th=[ 212], 99.50th=[ 241], 99.90th=[ 474], 99.95th=[ 486], 00:21:23.996 | 99.99th=[ 1352] 00:21:23.996 bw ( KiB/s): min=12288, max=12288, per=20.43%, avg=12288.00, stdev= 0.00, samples=1 00:21:23.996 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:21:23.996 lat (usec) : 100=8.02%, 250=80.72%, 500=11.24% 00:21:23.996 lat (msec) : 2=0.02% 00:21:23.996 cpu : usr=1.20%, sys=3.30%, ctx=5321, majf=0, minf=9 00:21:23.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.996 issued rwts: total=2560,2761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.996 00:21:23.996 Run status group 0 (all jobs): 00:21:23.996 READ: bw=56.2MiB/s (58.9MB/s), 9886KiB/s-19.1MiB/s 
(10.1MB/s-20.0MB/s), io=56.2MiB (58.9MB), run=1001-1001msec 00:21:23.996 WRITE: bw=58.7MiB/s (61.6MB/s), 9.99MiB/s-20.0MiB/s (10.5MB/s-20.9MB/s), io=58.8MiB (61.6MB), run=1001-1001msec 00:21:23.996 00:21:23.996 Disk stats (read/write): 00:21:23.996 nvme0n1: ios=3946/4096, merge=0/0, ticks=500/373, in_queue=873, util=90.58% 00:21:23.996 nvme0n2: ios=2097/2422, merge=0/0, ticks=517/367, in_queue=884, util=90.24% 00:21:23.996 nvme0n3: ios=4271/4608, merge=0/0, ticks=449/380, in_queue=829, util=90.16% 00:21:23.996 nvme0n4: ios=2186/2560, merge=0/0, ticks=499/373, in_queue=872, util=90.34% 00:21:24.254 07:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:24.254 [global] 00:21:24.254 thread=1 00:21:24.254 invalidate=1 00:21:24.254 rw=write 00:21:24.254 time_based=1 00:21:24.254 runtime=1 00:21:24.254 ioengine=libaio 00:21:24.254 direct=1 00:21:24.254 bs=4096 00:21:24.254 iodepth=128 00:21:24.254 norandommap=0 00:21:24.254 numjobs=1 00:21:24.254 00:21:24.254 verify_dump=1 00:21:24.254 verify_backlog=512 00:21:24.254 verify_state_save=0 00:21:24.254 do_verify=1 00:21:24.254 verify=crc32c-intel 00:21:24.254 [job0] 00:21:24.254 filename=/dev/nvme0n1 00:21:24.254 [job1] 00:21:24.254 filename=/dev/nvme0n2 00:21:24.254 [job2] 00:21:24.254 filename=/dev/nvme0n3 00:21:24.254 [job3] 00:21:24.254 filename=/dev/nvme0n4 00:21:24.254 Could not set queue depth (nvme0n1) 00:21:24.254 Could not set queue depth (nvme0n2) 00:21:24.254 Could not set queue depth (nvme0n3) 00:21:24.254 Could not set queue depth (nvme0n4) 00:21:24.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:24.254 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:24.254 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:24.254 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:24.254 fio-3.35 00:21:24.254 Starting 4 threads 00:21:25.627 00:21:25.627 job0: (groupid=0, jobs=1): err= 0: pid=65387: Wed Nov 20 07:18:49 2024 00:21:25.627 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:21:25.627 slat (usec): min=3, max=3230, avg=73.15, stdev=294.31 00:21:25.627 clat (usec): min=6410, max=13936, avg=9255.08, stdev=1276.58 00:21:25.627 lat (usec): min=6469, max=13950, avg=9328.23, stdev=1304.17 00:21:25.627 clat percentiles (usec): 00:21:25.627 | 1.00th=[ 6783], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 8291], 00:21:25.627 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9372], 00:21:25.627 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11469], 00:21:25.627 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13435], 99.95th=[13566], 00:21:25.627 | 99.99th=[13960] 00:21:25.627 write: IOPS=7052, BW=27.5MiB/s (28.9MB/s)(27.6MiB/1003msec); 0 zone resets 00:21:25.627 slat (usec): min=5, max=3036, avg=68.06, stdev=236.63 00:21:25.627 clat (usec): min=1990, max=13619, avg=9217.52, stdev=1414.92 00:21:25.627 lat (usec): min=2494, max=13632, avg=9285.58, stdev=1429.00 00:21:25.627 clat percentiles (usec): 00:21:25.627 | 1.00th=[ 6325], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8160], 00:21:25.627 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 9503], 00:21:25.627 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:21:25.627 | 
99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13566], 00:21:25.627 | 99.99th=[13566] 00:21:25.627 bw ( KiB/s): min=24625, max=31000, per=25.91%, avg=27812.50, stdev=4507.81, samples=2 00:21:25.627 iops : min= 6156, max= 7750, avg=6953.00, stdev=1127.13, samples=2 00:21:25.627 lat (msec) : 2=0.01%, 4=0.20%, 10=66.33%, 20=33.47% 00:21:25.627 cpu : usr=3.29%, sys=11.68%, ctx=931, majf=0, minf=9 00:21:25.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:25.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.628 issued rwts: total=6656,7074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.628 job1: (groupid=0, jobs=1): err= 0: pid=65388: Wed Nov 20 07:18:49 2024 00:21:25.628 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:21:25.628 slat (usec): min=4, max=4614, avg=61.79, stdev=392.42 00:21:25.628 clat (usec): min=5170, max=16269, avg=8616.77, stdev=929.21 00:21:25.628 lat (usec): min=5178, max=18992, avg=8678.56, stdev=943.86 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 5473], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8356], 00:21:25.628 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:25.628 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9372], 00:21:25.628 | 99.00th=[13042], 99.50th=[13304], 99.90th=[16188], 99.95th=[16188], 00:21:25.628 | 99.99th=[16319] 00:21:25.628 write: IOPS=7789, BW=30.4MiB/s (31.9MB/s)(30.5MiB/1002msec); 0 zone resets 00:21:25.628 slat (usec): min=3, max=5760, avg=63.11, stdev=380.80 00:21:25.628 clat (usec): min=506, max=13398, avg=7798.88, stdev=838.00 00:21:25.628 lat (usec): min=2812, max=13567, avg=7861.99, stdev=770.83 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 4686], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7308], 00:21:25.628 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:25.628 | 70.00th=[ 8094], 80.00th=[ 8160], 90.00th=[ 8291], 95.00th=[ 8455], 00:21:25.628 | 99.00th=[10421], 99.50th=[10552], 99.90th=[12256], 99.95th=[13304], 00:21:25.628 | 99.99th=[13435] 00:21:25.628 bw ( KiB/s): min=28720, max=32768, per=28.64%, avg=30744.00, stdev=2862.37, samples=2 00:21:25.628 iops : min= 7180, max= 8192, avg=7686.00, stdev=715.59, samples=2 00:21:25.628 lat (usec) : 750=0.01% 00:21:25.628 lat (msec) : 4=0.32%, 10=97.38%, 20=2.30% 00:21:25.628 cpu : usr=3.20%, sys=12.59%, ctx=333, majf=0, minf=6 00:21:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.628 issued rwts: total=7680,7805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.628 job2: (groupid=0, jobs=1): err= 0: pid=65389: Wed Nov 20 07:18:49 2024 00:21:25.628 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:21:25.628 slat (usec): min=3, max=3420, avg=78.77, stdev=322.05 00:21:25.628 clat (usec): min=7289, max=13332, avg=10036.12, stdev=865.61 00:21:25.628 lat (usec): min=7302, max=13344, avg=10114.89, stdev=902.28 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:21:25.628 | 30.00th=[ 9896], 40.00th=[ 
9896], 50.00th=[10028], 60.00th=[10159], 00:21:25.628 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11338], 95.00th=[11731], 00:21:25.628 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13042], 99.95th=[13173], 00:21:25.628 | 99.99th=[13304] 00:21:25.628 write: IOPS=6606, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1003msec); 0 zone resets 00:21:25.628 slat (usec): min=5, max=2852, avg=73.63, stdev=285.39 00:21:25.628 clat (usec): min=2309, max=13174, avg=9843.30, stdev=1007.07 00:21:25.628 lat (usec): min=2940, max=13185, avg=9916.94, stdev=1030.32 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9241], 00:21:25.628 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:21:25.628 | 70.00th=[10028], 80.00th=[10159], 90.00th=[11076], 95.00th=[11863], 00:21:25.628 | 99.00th=[12649], 99.50th=[12780], 99.90th=[12911], 99.95th=[13042], 00:21:25.628 | 99.99th=[13173] 00:21:25.628 bw ( KiB/s): min=25728, max=26316, per=24.24%, avg=26022.00, stdev=415.78, samples=2 00:21:25.628 iops : min= 6432, max= 6579, avg=6505.50, stdev=103.94, samples=2 00:21:25.628 lat (msec) : 4=0.13%, 10=54.79%, 20=45.07% 00:21:25.628 cpu : usr=2.99%, sys=10.48%, ctx=845, majf=0, minf=5 00:21:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.628 issued rwts: total=6144,6626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.628 job3: (groupid=0, jobs=1): err= 0: pid=65390: Wed Nov 20 07:18:49 2024 00:21:25.628 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:21:25.628 slat (usec): min=2, max=3313, avg=93.20, stdev=470.41 00:21:25.628 clat (usec): min=8933, max=13405, avg=12251.05, stdev=562.87 00:21:25.628 lat (usec): min=11391, max=13416, avg=12344.25, stdev=317.11 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 9503], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:21:25.628 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:21:25.628 | 70.00th=[12518], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:21:25.628 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:21:25.628 | 99.99th=[13435] 00:21:25.628 write: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1003msec); 0 zone resets 00:21:25.628 slat (usec): min=6, max=7892, avg=92.54, stdev=442.34 00:21:25.628 clat (usec): min=325, max=20572, avg=11727.38, stdev=1315.88 00:21:25.628 lat (usec): min=2951, max=20588, avg=11819.92, stdev=1247.10 00:21:25.628 clat percentiles (usec): 00:21:25.628 | 1.00th=[ 6063], 5.00th=[11076], 10.00th=[11338], 20.00th=[11469], 00:21:25.628 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:21:25.628 | 70.00th=[11994], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:21:25.628 | 99.00th=[19006], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:21:25.628 | 99.99th=[20579] 00:21:25.628 bw ( KiB/s): min=20480, max=21768, per=19.68%, avg=21124.00, stdev=910.75, samples=2 00:21:25.628 iops : min= 5120, max= 5442, avg=5281.00, stdev=227.69, samples=2 00:21:25.628 lat (usec) : 500=0.01% 00:21:25.628 lat (msec) : 4=0.30%, 10=3.51%, 20=95.88%, 50=0.29% 00:21:25.628 cpu : usr=1.80%, sys=9.78%, ctx=338, majf=0, minf=13 00:21:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
00:21:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.628 issued rwts: total=5120,5409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.628 00:21:25.628 Run status group 0 (all jobs): 00:21:25.628 READ: bw=99.7MiB/s (105MB/s), 19.9MiB/s-29.9MiB/s (20.9MB/s-31.4MB/s), io=100MiB (105MB), run=1002-1003msec 00:21:25.628 WRITE: bw=105MiB/s (110MB/s), 21.1MiB/s-30.4MiB/s (22.1MB/s-31.9MB/s), io=105MiB (110MB), run=1002-1003msec 00:21:25.628 00:21:25.628 Disk stats (read/write): 00:21:25.628 nvme0n1: ios=5831/6144, merge=0/0, ticks=17825/17154, in_queue=34979, util=89.68% 00:21:25.628 nvme0n2: ios=6705/6910, merge=0/0, ticks=54513/50484, in_queue=104997, util=90.14% 00:21:25.628 nvme0n3: ios=5672/5647, merge=0/0, ticks=18522/16315, in_queue=34837, util=90.47% 00:21:25.628 nvme0n4: ios=4637/4640, merge=0/0, ticks=13395/12667, in_queue=26062, util=90.54% 00:21:25.628 07:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:25.628 [global] 00:21:25.628 thread=1 00:21:25.628 invalidate=1 00:21:25.628 rw=randwrite 00:21:25.628 time_based=1 00:21:25.628 runtime=1 00:21:25.628 ioengine=libaio 00:21:25.628 direct=1 00:21:25.628 bs=4096 00:21:25.628 iodepth=128 00:21:25.628 norandommap=0 00:21:25.628 numjobs=1 00:21:25.628 00:21:25.628 verify_dump=1 00:21:25.628 verify_backlog=512 00:21:25.628 verify_state_save=0 00:21:25.628 do_verify=1 00:21:25.628 verify=crc32c-intel 00:21:25.628 [job0] 00:21:25.628 filename=/dev/nvme0n1 00:21:25.628 [job1] 00:21:25.628 filename=/dev/nvme0n2 00:21:25.628 [job2] 00:21:25.628 filename=/dev/nvme0n3 00:21:25.628 [job3] 00:21:25.628 filename=/dev/nvme0n4 00:21:25.628 Could not set queue depth (nvme0n1) 00:21:25.628 Could not set queue depth (nvme0n2) 00:21:25.628 Could not set queue depth (nvme0n3) 00:21:25.628 Could not set queue depth (nvme0n4) 00:21:25.628 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:25.628 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:25.628 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:25.628 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:25.628 fio-3.35 00:21:25.628 Starting 4 threads 00:21:27.003 00:21:27.003 job0: (groupid=0, jobs=1): err= 0: pid=65456: Wed Nov 20 07:18:50 2024 00:21:27.003 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec) 00:21:27.003 slat (usec): min=2, max=4098, avg=54.04, stdev=334.47 00:21:27.003 clat (usec): min=4448, max=12109, avg=7552.70, stdev=799.38 00:21:27.003 lat (usec): min=4460, max=14417, avg=7606.74, stdev=817.46 00:21:27.003 clat percentiles (usec): 00:21:27.003 | 1.00th=[ 4752], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7242], 00:21:27.003 | 30.00th=[ 7373], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7635], 00:21:27.003 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8225], 00:21:27.003 | 99.00th=[11338], 99.50th=[11600], 99.90th=[11863], 99.95th=[11994], 00:21:27.003 | 99.99th=[12125] 00:21:27.003 write: IOPS=9058, BW=35.4MiB/s (37.1MB/s)(35.5MiB/1003msec); 0 zone resets 00:21:27.003 slat (usec): min=6, 
max=3945, avg=54.55, stdev=321.13 00:21:27.003 clat (usec): min=318, max=9414, avg=6749.05, stdev=737.41 00:21:27.003 lat (usec): min=2847, max=9673, avg=6803.60, stdev=681.92 00:21:27.003 clat percentiles (usec): 00:21:27.003 | 1.00th=[ 3982], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6325], 00:21:27.003 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:21:27.003 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7308], 95.00th=[ 7701], 00:21:27.003 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[ 9372], 99.95th=[ 9372], 00:21:27.003 | 99.99th=[ 9372] 00:21:27.003 bw ( KiB/s): min=34808, max=36856, per=38.40%, avg=35832.00, stdev=1448.15, samples=2 00:21:27.003 iops : min= 8702, max= 9214, avg=8958.00, stdev=362.04, samples=2 00:21:27.003 lat (usec) : 500=0.01% 00:21:27.003 lat (msec) : 4=0.51%, 10=98.62%, 20=0.87% 00:21:27.003 cpu : usr=3.69%, sys=13.27%, ctx=382, majf=0, minf=9 00:21:27.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:27.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.004 issued rwts: total=8704,9086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.004 job1: (groupid=0, jobs=1): err= 0: pid=65457: Wed Nov 20 07:18:50 2024 00:21:27.004 read: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1011msec) 00:21:27.004 slat (usec): min=3, max=15196, avg=172.41, stdev=1268.45 00:21:27.004 clat (usec): min=7526, max=45346, avg=21760.40, stdev=3308.65 00:21:27.004 lat (usec): min=11029, max=45354, avg=21932.81, stdev=3460.02 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[15926], 5.00th=[17695], 10.00th=[19268], 20.00th=[20055], 00:21:27.004 | 30.00th=[20579], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 00:21:27.004 | 70.00th=[21627], 80.00th=[24773], 90.00th=[25560], 95.00th=[25822], 00:21:27.004 | 99.00th=[34341], 99.50th=[38011], 99.90th=[45351], 99.95th=[45351], 00:21:27.004 | 99.99th=[45351] 00:21:27.004 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:21:27.004 slat (usec): min=3, max=12039, avg=172.21, stdev=1033.03 00:21:27.004 clat (usec): min=5308, max=68926, avg=22821.31, stdev=12196.34 00:21:27.004 lat (usec): min=5315, max=68936, avg=22993.52, stdev=12235.65 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[ 8455], 5.00th=[12518], 10.00th=[15139], 20.00th=[17695], 00:21:27.004 | 30.00th=[18482], 40.00th=[18744], 50.00th=[18744], 60.00th=[19006], 00:21:27.004 | 70.00th=[19530], 80.00th=[21627], 90.00th=[44827], 95.00th=[55837], 00:21:27.004 | 99.00th=[64750], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:21:27.004 | 99.99th=[68682] 00:21:27.004 bw ( KiB/s): min=11576, max=12856, per=13.09%, avg=12216.00, stdev=905.10, samples=2 00:21:27.004 iops : min= 2894, max= 3214, avg=3054.00, stdev=226.27, samples=2 00:21:27.004 lat (msec) : 10=1.95%, 20=44.37%, 50=49.49%, 100=4.18% 00:21:27.004 cpu : usr=1.68%, sys=4.95%, ctx=184, majf=0, minf=7 00:21:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.004 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.004 job2: (groupid=0, 
jobs=1): err= 0: pid=65458: Wed Nov 20 07:18:50 2024 00:21:27.004 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:21:27.004 slat (usec): min=3, max=5117, avg=63.41, stdev=396.65 00:21:27.004 clat (usec): min=5141, max=13780, avg=8599.49, stdev=937.45 00:21:27.004 lat (usec): min=5149, max=16434, avg=8662.90, stdev=947.80 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[ 5407], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8291], 00:21:27.004 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:27.004 | 70.00th=[ 8848], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9503], 00:21:27.004 | 99.00th=[12911], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:21:27.004 | 99.99th=[13829] 00:21:27.004 write: IOPS=7827, BW=30.6MiB/s (32.1MB/s)(30.6MiB/1002msec); 0 zone resets 00:21:27.004 slat (usec): min=4, max=5272, avg=61.32, stdev=380.26 00:21:27.004 clat (usec): min=1818, max=11920, avg=7778.14, stdev=908.59 00:21:27.004 lat (usec): min=1829, max=11939, avg=7839.46, stdev=848.63 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7308], 00:21:27.004 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:27.004 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8848], 00:21:27.004 | 99.00th=[10290], 99.50th=[10421], 99.90th=[10683], 99.95th=[10683], 00:21:27.004 | 99.99th=[11863] 00:21:27.004 bw ( KiB/s): min=29032, max=32694, per=33.07%, avg=30863.00, stdev=2589.43, samples=2 00:21:27.004 iops : min= 7258, max= 8173, avg=7715.50, stdev=647.00, samples=2 00:21:27.004 lat (msec) : 2=0.10%, 4=0.06%, 10=96.26%, 20=3.58% 00:21:27.004 cpu : usr=3.80%, sys=11.39%, ctx=343, majf=0, minf=16 00:21:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.004 issued rwts: total=7680,7843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.004 job3: (groupid=0, jobs=1): err= 0: pid=65459: Wed Nov 20 07:18:50 2024 00:21:27.004 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:21:27.004 slat (usec): min=6, max=13453, avg=137.35, stdev=929.15 00:21:27.004 clat (usec): min=1014, max=38993, avg=18963.69, stdev=3912.55 00:21:27.004 lat (usec): min=9352, max=43796, avg=19101.04, stdev=3926.65 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[10028], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484], 00:21:27.004 | 30.00th=[15664], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:21:27.004 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22676], 95.00th=[23987], 00:21:27.004 | 99.00th=[26346], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:21:27.004 | 99.99th=[39060] 00:21:27.004 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:21:27.004 slat (usec): min=5, max=18251, avg=139.38, stdev=975.26 00:21:27.004 clat (usec): min=5836, max=29828, avg=16778.51, stdev=4183.74 00:21:27.004 lat (usec): min=8368, max=29993, avg=16917.89, stdev=4124.84 00:21:27.004 clat percentiles (usec): 00:21:27.004 | 1.00th=[10028], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:21:27.004 | 30.00th=[15795], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:21:27.004 | 70.00th=[19006], 80.00th=[19268], 90.00th=[20055], 95.00th=[22152], 00:21:27.004 | 99.00th=[28443], 
99.50th=[28705], 99.90th=[29754], 99.95th=[29754], 00:21:27.004 | 99.99th=[29754] 00:21:27.004 bw ( KiB/s): min=12344, max=16328, per=15.36%, avg=14336.00, stdev=2817.11, samples=2 00:21:27.004 iops : min= 3086, max= 4082, avg=3584.00, stdev=704.28, samples=2 00:21:27.004 lat (msec) : 2=0.01%, 10=0.91%, 20=63.04%, 50=36.04% 00:21:27.004 cpu : usr=1.68%, sys=6.74%, ctx=156, majf=0, minf=9 00:21:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.004 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.004 00:21:27.004 Run status group 0 (all jobs): 00:21:27.004 READ: bw=87.4MiB/s (91.7MB/s), 10.3MiB/s-33.9MiB/s (10.8MB/s-35.5MB/s), io=88.4MiB (92.7MB), run=1002-1011msec 00:21:27.004 WRITE: bw=91.1MiB/s (95.6MB/s), 11.9MiB/s-35.4MiB/s (12.4MB/s-37.1MB/s), io=92.1MiB (96.6MB), run=1002-1011msec 00:21:27.004 00:21:27.004 Disk stats (read/write): 00:21:27.004 nvme0n1: ios=7730/7871, merge=0/0, ticks=54915/49396, in_queue=104311, util=89.78% 00:21:27.004 nvme0n2: ios=2609/2743, merge=0/0, ticks=53997/52021, in_queue=106018, util=90.34% 00:21:27.004 nvme0n3: ios=6703/6924, merge=0/0, ticks=54204/50483, in_queue=104687, util=90.47% 00:21:27.004 nvme0n4: ios=2792/3072, merge=0/0, ticks=54650/52387, in_queue=107037, util=90.44% 00:21:27.004 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:21:27.004 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=65473 00:21:27.004 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:21:27.004 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:27.004 [global] 00:21:27.004 thread=1 00:21:27.004 invalidate=1 00:21:27.004 rw=read 00:21:27.004 time_based=1 00:21:27.004 runtime=10 00:21:27.004 ioengine=libaio 00:21:27.004 direct=1 00:21:27.004 bs=4096 00:21:27.004 iodepth=1 00:21:27.004 norandommap=1 00:21:27.004 numjobs=1 00:21:27.004 00:21:27.004 [job0] 00:21:27.004 filename=/dev/nvme0n1 00:21:27.004 [job1] 00:21:27.004 filename=/dev/nvme0n2 00:21:27.004 [job2] 00:21:27.004 filename=/dev/nvme0n3 00:21:27.004 [job3] 00:21:27.004 filename=/dev/nvme0n4 00:21:27.004 Could not set queue depth (nvme0n1) 00:21:27.004 Could not set queue depth (nvme0n2) 00:21:27.004 Could not set queue depth (nvme0n3) 00:21:27.004 Could not set queue depth (nvme0n4) 00:21:27.004 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:27.004 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:27.004 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:27.004 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:27.004 fio-3.35 00:21:27.004 Starting 4 threads 00:21:30.312 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:30.312 fio: pid=65516, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:30.312 fio: io_u error on file /dev/nvme0n4: 
Operation not supported: read offset=70873088, buflen=4096 00:21:30.312 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:30.312 fio: pid=65515, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:30.312 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=90767360, buflen=4096 00:21:30.312 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:30.312 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:30.312 fio: pid=65513, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:30.312 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59703296, buflen=4096 00:21:30.570 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:30.570 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:30.570 fio: pid=65514, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:30.570 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28008448, buflen=4096 00:21:30.570 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:30.570 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:30.570 00:21:30.570 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65513: Wed Nov 20 07:18:54 2024 00:21:30.570 read: IOPS=9317, BW=36.4MiB/s (38.2MB/s)(121MiB/3323msec) 00:21:30.570 slat (usec): min=3, max=13774, avg= 8.30, stdev=138.98 00:21:30.570 clat (usec): min=32, max=1590, avg=98.45, stdev=22.90 00:21:30.570 lat (usec): min=74, max=13907, avg=106.75, stdev=141.32 00:21:30.570 clat percentiles (usec): 00:21:30.570 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:21:30.570 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 98], 00:21:30.570 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 126], 00:21:30.570 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 281], 99.95th=[ 351], 00:21:30.570 | 99.99th=[ 914] 00:21:30.570 bw ( KiB/s): min=35776, max=39832, per=36.17%, avg=38222.67, stdev=1344.91, samples=6 00:21:30.570 iops : min= 8944, max= 9958, avg=9555.67, stdev=336.23, samples=6 00:21:30.570 lat (usec) : 50=0.01%, 100=68.70%, 250=31.15%, 500=0.12%, 750=0.01% 00:21:30.570 lat (usec) : 1000=0.01% 00:21:30.570 lat (msec) : 2=0.01% 00:21:30.570 cpu : usr=0.93%, sys=6.26%, ctx=30968, majf=0, minf=1 00:21:30.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 issued rwts: total=30961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.570 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65514: 
Wed Nov 20 07:18:54 2024 00:21:30.570 read: IOPS=6550, BW=25.6MiB/s (26.8MB/s)(90.7MiB/3545msec) 00:21:30.570 slat (usec): min=3, max=11648, avg= 8.77, stdev=146.08 00:21:30.570 clat (usec): min=61, max=1762, avg=143.20, stdev=38.31 00:21:30.570 lat (usec): min=73, max=11763, avg=151.97, stdev=150.84 00:21:30.570 clat percentiles (usec): 00:21:30.570 | 1.00th=[ 80], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 110], 00:21:30.570 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:21:30.570 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 182], 00:21:30.570 | 99.00th=[ 206], 99.50th=[ 243], 99.90th=[ 445], 99.95th=[ 553], 00:21:30.570 | 99.99th=[ 1516] 00:21:30.570 bw ( KiB/s): min=24136, max=35448, per=24.75%, avg=26150.50, stdev=4556.93, samples=6 00:21:30.570 iops : min= 6034, max= 8862, avg=6537.50, stdev=1139.29, samples=6 00:21:30.570 lat (usec) : 100=14.91%, 250=84.62%, 500=0.40%, 750=0.04% 00:21:30.570 lat (msec) : 2=0.02% 00:21:30.570 cpu : usr=0.65%, sys=4.37%, ctx=23241, majf=0, minf=1 00:21:30.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 issued rwts: total=23223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.570 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65515: Wed Nov 20 07:18:54 2024 00:21:30.570 read: IOPS=7107, BW=27.8MiB/s (29.1MB/s)(86.6MiB/3118msec) 00:21:30.570 slat (usec): min=3, max=14861, avg= 7.31, stdev=126.13 00:21:30.570 clat (usec): min=82, max=3568, avg=132.79, stdev=39.17 00:21:30.570 lat (usec): min=94, max=14996, avg=140.10, stdev=132.10 00:21:30.570 clat percentiles (usec): 00:21:30.570 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:21:30.570 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 130], 00:21:30.570 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 163], 95.00th=[ 192], 00:21:30.570 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 359], 99.95th=[ 469], 00:21:30.570 | 99.99th=[ 1467] 00:21:30.570 bw ( KiB/s): min=20416, max=30808, per=27.01%, avg=28540.50, stdev=4009.63, samples=6 00:21:30.570 iops : min= 5104, max= 7702, avg=7135.00, stdev=1002.34, samples=6 00:21:30.570 lat (usec) : 100=0.15%, 250=99.50%, 500=0.30%, 750=0.01%, 1000=0.01% 00:21:30.570 lat (msec) : 2=0.02%, 4=0.01% 00:21:30.570 cpu : usr=0.74%, sys=4.52%, ctx=22163, majf=0, minf=2 00:21:30.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.570 issued rwts: total=22161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.571 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65516: Wed Nov 20 07:18:54 2024 00:21:30.571 read: IOPS=5923, BW=23.1MiB/s (24.3MB/s)(67.6MiB/2921msec) 00:21:30.571 slat (nsec): min=3978, max=67295, avg=5636.23, stdev=2011.65 00:21:30.571 clat (usec): min=85, max=1426, avg=162.60, stdev=26.09 00:21:30.571 lat (usec): min=91, max=1432, avg=168.24, stdev=26.27 00:21:30.571 clat percentiles (usec): 00:21:30.571 | 1.00th=[ 124], 5.00th=[ 143], 10.00th=[ 145], 
20.00th=[ 149], 00:21:30.571 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:21:30.571 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 200], 00:21:30.571 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 334], 99.95th=[ 465], 00:21:30.571 | 99.99th=[ 1237] 00:21:30.571 bw ( KiB/s): min=23688, max=24568, per=22.99%, avg=24292.60, stdev=346.60, samples=5 00:21:30.571 iops : min= 5922, max= 6142, avg=6073.00, stdev=86.57, samples=5 00:21:30.571 lat (usec) : 100=0.30%, 250=99.40%, 500=0.25%, 750=0.01%, 1000=0.02% 00:21:30.571 lat (msec) : 2=0.01% 00:21:30.571 cpu : usr=0.62%, sys=3.36%, ctx=17304, majf=0, minf=2 00:21:30.571 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.571 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.571 issued rwts: total=17304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.571 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.571 00:21:30.571 Run status group 0 (all jobs): 00:21:30.571 READ: bw=103MiB/s (108MB/s), 23.1MiB/s-36.4MiB/s (24.3MB/s-38.2MB/s), io=366MiB (384MB), run=2921-3545msec 00:21:30.571 00:21:30.571 Disk stats (read/write): 00:21:30.571 nvme0n1: ios=29804/0, merge=0/0, ticks=2913/0, in_queue=2913, util=95.53% 00:21:30.571 nvme0n2: ios=21911/0, merge=0/0, ticks=3175/0, in_queue=3175, util=95.53% 00:21:30.571 nvme0n3: ios=20873/0, merge=0/0, ticks=2741/0, in_queue=2741, util=96.65% 00:21:30.571 nvme0n4: ios=17129/0, merge=0/0, ticks=2779/0, in_queue=2779, util=96.75% 00:21:30.903 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:30.903 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:31.420 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:31.420 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65473 00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:31.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
00:21:30.903 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:21:30.903 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:21:31.161 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:21:31.420 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:21:31.420 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65473
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:21:31.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:31.678 nvmf hotplug test: fio failed as expected
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:21:31.678 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync
00:21:31.936 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:21:31.937 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e
00:21:31.937 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20}
00:21:31.937 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:21:31.937 rmmod nvme_tcp
00:21:31.937 rmmod nvme_fabrics
00:21:31.937 rmmod nvme_keyring
00:21:31.937 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 65100 ']'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 65100
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65100 ']'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65100
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65100
00:21:32.195 killing process with pid 65100
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65100'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65100
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65100
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns=
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]]
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator0
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns=
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0'
00:21:32.195 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]]
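The waitforserial_disconnect trace above amounts to polling lsblk until no block device reports the test serial anymore. A simplified sketch of that logic (the retry cap is illustrative, not the verbatim autotest_common.sh helper):

    # Poll until the serial disappears from lsblk output.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # illustrative retry limit
            sleep 1
        done
        return 0
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME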
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns=
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1'
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=()
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore
00:21:32.455 ************************************
00:21:32.455 END TEST nvmf_fio_target
00:21:32.455 ************************************
00:21:32.455
00:21:32.455 real 0m17.612s
00:21:32.455 user 1m6.645s
00:21:32.455 sys 0m7.784s
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:21:32.455 ************************************
00:21:32.455 START TEST nvmf_bdevio
00:21:32.455 ************************************
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:21:32.455 * Looking for test storage...
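The iptr step in the teardown above is the counterpart of the tagged-rule setup seen later in this log: because every rule the tests add carries an 'SPDK_NVMF:' comment, the cleanup can strip just those rules by round-tripping the ruleset.

    # Drop only SPDK-tagged rules; everything else survives the round trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore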
00:21:32.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:21:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.455 --rc genhtml_branch_coverage=1
00:21:32.455 --rc genhtml_function_coverage=1
00:21:32.455 --rc genhtml_legend=1
00:21:32.455 --rc geninfo_all_blocks=1
00:21:32.455 --rc geninfo_unexecuted_blocks=1
00:21:32.455
00:21:32.455 '
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:21:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.455 --rc genhtml_branch_coverage=1
00:21:32.455 --rc genhtml_function_coverage=1
00:21:32.455 --rc genhtml_legend=1
00:21:32.455 --rc geninfo_all_blocks=1
00:21:32.455 --rc geninfo_unexecuted_blocks=1
00:21:32.455
00:21:32.455 '
00:21:32.455 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:21:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.456 --rc genhtml_branch_coverage=1
00:21:32.456 --rc genhtml_function_coverage=1
00:21:32.456 --rc genhtml_legend=1
00:21:32.456 --rc geninfo_all_blocks=1
00:21:32.456 --rc geninfo_unexecuted_blocks=1
00:21:32.456
00:21:32.456 '
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:21:32.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.456 --rc genhtml_branch_coverage=1
00:21:32.456 --rc genhtml_function_coverage=1
00:21:32.456 --rc genhtml_legend=1
00:21:32.456 --rc geninfo_all_blocks=1
00:21:32.456 --rc geninfo_unexecuted_blocks=1
00:21:32.456
00:21:32.456 '
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
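The cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them component by component; 1.15 sorts before 2, so lt succeeds and the newer lcov option set is exported. A condensed sketch of the same logic (not the verbatim scripts/common.sh helper):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS=.-: i
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"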
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:21:32.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']'
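The "[: : integer expression expected" complaint above comes from the '[' '' -eq 1 ']' test at nvmf/common.sh line 31: an empty string is not a valid operand for -eq. Which variable expanded empty there is not visible in this log; the generic guard is to give the expansion a numeric default, along these lines (variable name hypothetical):

    [ "$some_flag" -eq 1 ]          # errors out when $some_flag is empty or unset
    [ "${some_flag:-0}" -eq 1 ]     # defaults to 0, so the test stays well-formed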
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp == tcp ]]
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:21:32.456 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=()
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk
00:21:32.716 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns=
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0
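Stripped of the xtrace noise, the pair-0 plumbing above (and the bridge joins that follow shortly) reduce to a standard veth-plus-namespace recipe; a condensed recap of the commands actually traced:

    ip netns add nvmf_ns_spdk                                  # target side lives in its own namespace
    ip link add initiator0 type veth peer name initiator0_br   # initiator veth pair
    ip link add target0 type veth peer name target0_br         # target veth pair
    ip link set target0 netns nvmf_ns_spdk                     # move the target end into the namespace
    ip link set initiator0 up
    ip link set initiator0_br up
    ip link set target0_br up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br                   # both *_br ends join the bridge
    ip link set target0_br master nvmf_br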
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias
00:21:32.717 10.0.0.1
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:21:32.717 10.0.0.2
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
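The set_ip steps above turn a running counter into dotted-quad addresses: 167772161 is 0x0A000001, i.e. 10.0.0.1, and 167772162 is 10.0.0.2. The trace only shows printf being handed the already-split octets; one plausible sketch of the split itself (the arithmetic inside val_to_ip is elided from this log):

    # Shift and mask out each octet: 167772162 == 0x0A000002 -> 10.0.0.2.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) $((  val         & 0xff ))
    }
    val_to_ip 167772162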
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target0_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:21:32.717 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=()
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target1
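Each per-initiator ACCEPT rule (one appeared above for initiator0, another follows for initiator1) is inserted through the ipts helper, which tags the rule with its own arguments so the iptables-save round trip in the teardown can strip it later. A sketch of that wrapper pattern:

    # Tag every rule with its own arguments under an SPDK_NVMF: marker.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT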
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns=
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias
00:21:32.718 10.0.0.3
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1'
00:21:32.718 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4
00:21:32.719 10.0.0.4
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:21:32.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
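The address lookups in ping_ips above don't parse 'ip addr'; they read back the ifalias attribute that set_ip wrote via tee earlier. The same idea in two lines (a sketch of the traced behaviour, host-side only):

    get_ip_address() { cat "/sys/class/net/$1/ifalias"; }   # prints e.g. 10.0.0.1
    get_ip_address initiator0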
00:21:32.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms
00:21:32.719
00:21:32.719 --- 10.0.0.1 ping statistics ---
00:21:32.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.719 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:21:32.719 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:21:32.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:32.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms
00:21:32.720
00:21:32.720 --- 10.0.0.2 ping statistics ---
00:21:32.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.720 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ ))
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:21:32.720 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:21:32.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:32.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms
00:21:32.720
00:21:32.720 --- 10.0.0.3 ping statistics ---
00:21:32.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.720 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:21:32.978 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:21:32.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:21:32.979 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:21:32.979 00:21:32.979 --- 10.0.0.4 ping statistics --- 00:21:32.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.979 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 
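The xtrace above shows how the setup helpers resolve an address for a logical device (initiator0/target0, ...): each device's IP is stored in the interface's ifalias attribute when the interface is created, and get_ip_address reads it back from sysfs, optionally inside the target network namespace via a bash nameref. A minimal sketch of that pattern, assuming bash 4.3+ and a namespace wrapper array such as NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk); the real helper lives in test/nvmf/setup.sh and carries more validation:

get_ip_address_sketch() {
    local dev=$1 in_ns=${2:-} ip
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns    # nameref: expand the array whose *name* was passed in
    else
        local ns=()           # no namespace -> run the read directly
    fi
    # The IP was written to the interface's alias at creation time,
    # so a sysfs read (inside the namespace, if any) recovers it.
    ip=$("${ns[@]}" cat "/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}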
00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:32.979 
07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:21:32.979 ' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=65829 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 65829 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 65829 ']' 00:21:32.979 07:18:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.979 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:32.979 [2024-11-20 07:18:57.026288] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:32.979 [2024-11-20 07:18:57.026344] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.979 [2024-11-20 07:18:57.164451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.268 [2024-11-20 07:18:57.196031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.268 [2024-11-20 07:18:57.196069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.268 [2024-11-20 07:18:57.196075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.268 [2024-11-20 07:18:57.196079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.268 [2024-11-20 07:18:57.196083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
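waitforlisten above blocks until the freshly started nvmf_tgt answers on its RPC socket (/var/tmp/spdk.sock, max_retries=100 per the trace). Roughly, it polls the socket with a short-timeout RPC while checking that the pid is still alive; a sketch of that loop, with the probe command illustrative (the real implementation in autotest_common.sh does more bookkeeping):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
        # any cheap RPC works as a liveness probe; -t 1 keeps each attempt short
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1   # never came up within the retry budget
}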
00:21:33.268 [2024-11-20 07:18:57.196916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.268 [2024-11-20 07:18:57.197015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:33.268 [2024-11-20 07:18:57.197118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.268 [2024-11-20 07:18:57.197120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:33.268 [2024-11-20 07:18:57.225422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 [2024-11-20 07:18:57.858640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 Malloc0 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:33.864 [2024-11-20 07:18:57.920985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:33.864 { 00:21:33.864 "params": { 00:21:33.864 "name": "Nvme$subsystem", 00:21:33.864 "trtype": "$TEST_TRANSPORT", 00:21:33.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.864 "adrfam": "ipv4", 00:21:33.864 "trsvcid": "$NVMF_PORT", 00:21:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.864 "hdgst": ${hdgst:-false}, 00:21:33.864 "ddgst": ${ddgst:-false} 00:21:33.864 }, 00:21:33.864 "method": "bdev_nvme_attach_controller" 00:21:33.864 } 00:21:33.864 EOF 00:21:33.864 )") 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:21:33.864 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:33.864 "params": { 00:21:33.864 "name": "Nvme1", 00:21:33.864 "trtype": "tcp", 00:21:33.864 "traddr": "10.0.0.2", 00:21:33.864 "adrfam": "ipv4", 00:21:33.864 "trsvcid": "4420", 00:21:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.864 "hdgst": false, 00:21:33.864 "ddgst": false 00:21:33.864 }, 00:21:33.864 "method": "bdev_nvme_attach_controller" 00:21:33.864 }' 00:21:33.864 [2024-11-20 07:18:57.956257] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
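The --json /dev/fd/62 argument above feeds bdevio a config generated on the fly: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per requested subsystem, pointing the initiator at the listener that the rpc_cmd calls just created (transport, malloc bdev, subsystem, namespace, listener). A condensed sketch of the here-doc-plus-jq pattern seen in the trace; field values are illustrative, and the real function in nvmf/common.sh templates more parameters (digests, host NQN) and wraps the entries in the full subsystem config that bdevio expects:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  }
}
EOF
        )")
    done
    # comma-join the entries and let jq validate/pretty-print the result
    local IFS=,
    jq . <<< "[${config[*]}]"
}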
00:21:33.864 [2024-11-20 07:18:57.956311] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65864 ] 00:21:34.123 [2024-11-20 07:18:58.096989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:34.123 [2024-11-20 07:18:58.134765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.123 [2024-11-20 07:18:58.134951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.123 [2024-11-20 07:18:58.134953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.123 [2024-11-20 07:18:58.173512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.123 I/O targets: 00:21:34.123 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:34.123 00:21:34.123 00:21:34.123 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.123 http://cunit.sourceforge.net/ 00:21:34.123 00:21:34.123 00:21:34.123 Suite: bdevio tests on: Nvme1n1 00:21:34.123 Test: blockdev write read block ...passed 00:21:34.123 Test: blockdev write zeroes read block ...passed 00:21:34.123 Test: blockdev write zeroes read no split ...passed 00:21:34.123 Test: blockdev write zeroes read split ...passed 00:21:34.123 Test: blockdev write zeroes read split partial ...passed 00:21:34.123 Test: blockdev reset ...[2024-11-20 07:18:58.308133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:34.123 [2024-11-20 07:18:58.308242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aa180 (9): Bad file descriptor 00:21:34.123 [2024-11-20 07:18:58.320902] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:34.123 passed 00:21:34.123 Test: blockdev write read 8 blocks ...passed 00:21:34.123 Test: blockdev write read size > 128k ...passed 00:21:34.123 Test: blockdev write read invalid size ...passed 00:21:34.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.123 Test: blockdev write read max offset ...passed 00:21:34.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.381 Test: blockdev writev readv 8 blocks ...passed 00:21:34.381 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.381 Test: blockdev writev readv block ...passed 00:21:34.381 Test: blockdev writev readv size > 128k ...passed 00:21:34.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.381 Test: blockdev comparev and writev ...[2024-11-20 07:18:58.326343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.381 [2024-11-20 07:18:58.326449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.381 [2024-11-20 07:18:58.326515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.326565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.326828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.326882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.326922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.326963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.327247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.327291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.327338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.327373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.327645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.327702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.327741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:34.382 [2024-11-20 07:18:58.327775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.382 passed 00:21:34.382 Test: blockdev nvme passthru rw ...passed 00:21:34.382 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:18:58.328368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.382 [2024-11-20 07:18:58.328437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.328547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.382 [2024-11-20 07:18:58.328598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.328708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.382 [2024-11-20 07:18:58.328747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.382 [2024-11-20 07:18:58.328852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.382 passed 00:21:34.382 Test: blockdev nvme admin passthru ...[2024-11-20 07:18:58.328897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.382 passed 00:21:34.382 Test: blockdev copy ...passed 00:21:34.382 00:21:34.382 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.382 suites 1 1 n/a 0 0 00:21:34.382 tests 23 23 23 0 0 00:21:34.382 asserts 152 152 152 0 n/a 00:21:34.382 00:21:34.382 Elapsed time = 0.135 seconds 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:34.382 rmmod nvme_tcp 00:21:34.382 rmmod nvme_fabrics 00:21:34.382 rmmod nvme_keyring 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:21:34.382 
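The rmmod lines above come from nvmfcleanup: immediately after the test tears its connections down, nvme-tcp may still hold references, so the unload is retried with failures tolerated before strict error handling is restored. A sketch of the pattern traced above (the break-on-success and sleep are assumptions; the set +e / 20-iteration loop / set -e shape is from the trace):

nvmfcleanup_sketch() {
    sync
    set +e   # modprobe -r is expected to fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
}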
07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 65829 ']' 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 65829 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 65829 ']' 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 65829 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65829 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:34.382 killing process with pid 65829 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65829' 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 65829 00:21:34.382 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 65829 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:34.640 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:34.641 07:18:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:34.641 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:21:34.943 00:21:34.943 real 0m2.387s 00:21:34.943 user 0m7.081s 00:21:34.943 sys 0m0.577s 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.943 ************************************ 00:21:34.943 END TEST nvmf_bdevio 00:21:34.943 ************************************ 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:34.943 00:21:34.943 real 2m24.757s 00:21:34.943 user 6m27.740s 00:21:34.943 sys 0m39.888s 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:34.943 ************************************ 00:21:34.943 END TEST nvmf_target_core 00:21:34.943 ************************************ 
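Note how the teardown undoes the firewall changes without tracking individual rules: every rule the setup inserted carries an 'SPDK_NVMF' comment (visible in the iptables ACCEPT rule at the top of this section), so iptr can dump the ruleset, drop the tagged lines, and load the remainder back, exactly as the common.sh@548 trace shows:

iptr() {
    # strip every rule tagged by the nvmf setup, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}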
00:21:34.943 07:18:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:21:34.943 07:18:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.943 07:18:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.943 07:18:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.943 ************************************ 00:21:34.943 START TEST nvmf_target_extra 00:21:34.943 ************************************ 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:21:34.943 * Looking for test storage... 00:21:34.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.943 07:18:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.943 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.944 --rc genhtml_branch_coverage=1 00:21:34.944 --rc genhtml_function_coverage=1 00:21:34.944 --rc genhtml_legend=1 00:21:34.944 --rc geninfo_all_blocks=1 00:21:34.944 --rc geninfo_unexecuted_blocks=1 00:21:34.944 00:21:34.944 ' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.944 --rc genhtml_branch_coverage=1 00:21:34.944 --rc genhtml_function_coverage=1 00:21:34.944 --rc genhtml_legend=1 00:21:34.944 --rc geninfo_all_blocks=1 00:21:34.944 --rc geninfo_unexecuted_blocks=1 00:21:34.944 00:21:34.944 ' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.944 --rc genhtml_branch_coverage=1 00:21:34.944 --rc genhtml_function_coverage=1 00:21:34.944 --rc genhtml_legend=1 00:21:34.944 --rc geninfo_all_blocks=1 00:21:34.944 --rc geninfo_unexecuted_blocks=1 00:21:34.944 00:21:34.944 ' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.944 --rc genhtml_branch_coverage=1 00:21:34.944 --rc genhtml_function_coverage=1 00:21:34.944 --rc genhtml_legend=1 00:21:34.944 --rc geninfo_all_blocks=1 00:21:34.944 --rc geninfo_unexecuted_blocks=1 00:21:34.944 00:21:34.944 ' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.944 07:18:59 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:34.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.944 ************************************ 00:21:34.944 START TEST nvmf_auth_target 00:21:34.944 ************************************ 00:21:34.944 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:35.206 * Looking for test storage... 
00:21:35.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.206 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.206 --rc genhtml_branch_coverage=1 00:21:35.206 --rc genhtml_function_coverage=1 00:21:35.206 --rc genhtml_legend=1 00:21:35.206 --rc geninfo_all_blocks=1 00:21:35.206 --rc geninfo_unexecuted_blocks=1 00:21:35.206 00:21:35.207 ' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.207 --rc genhtml_branch_coverage=1 00:21:35.207 --rc genhtml_function_coverage=1 00:21:35.207 --rc genhtml_legend=1 00:21:35.207 --rc geninfo_all_blocks=1 00:21:35.207 --rc geninfo_unexecuted_blocks=1 00:21:35.207 00:21:35.207 ' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.207 --rc genhtml_branch_coverage=1 00:21:35.207 --rc genhtml_function_coverage=1 00:21:35.207 --rc genhtml_legend=1 00:21:35.207 --rc geninfo_all_blocks=1 00:21:35.207 --rc geninfo_unexecuted_blocks=1 00:21:35.207 00:21:35.207 ' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.207 --rc genhtml_branch_coverage=1 00:21:35.207 --rc genhtml_function_coverage=1 00:21:35.207 --rc genhtml_legend=1 00:21:35.207 --rc geninfo_all_blocks=1 00:21:35.207 --rc geninfo_unexecuted_blocks=1 00:21:35.207 00:21:35.207 ' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:35.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
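The "integer expression expected" message captured above is emitted by test(1) when a numeric comparison (`'[' '' -eq 1 ']'`) receives an empty operand; the conditional evaluates false and the run simply continues down the other branch. An illustration of the failure mode and one conventional guard, using a hypothetical `val` variable:

    val=''
    [ "$val" -eq 1 ] && echo enabled        # stderr: integer expression expected; test is false
    [ "${val:-0}" -eq 1 ] && echo enabled   # defaulting the empty value silences the noise
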
nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@223 -- # create_target_ns 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:35.207 07:18:59 
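nvmftestinit now builds the virtual topology. The target side lives in its own network namespace, nvmf_ns_spdk, and every target-side command is replayed through `ip netns exec` via the NVMF_TARGET_NS_CMD array. The namespace lifecycle itself reduces to a few iproute2 calls (root required; a sketch, not the full create_target_ns helper):

    ip netns add nvmf_ns_spdk                      # isolated network stack for the target
    ip netns exec nvmf_ns_spdk ip link set lo up   # bring loopback up inside it
    ip netns exec nvmf_ns_spdk ip -brief addr      # inspect: only lo exists so far
    ip netns delete nvmf_ns_spdk                   # teardown when the test is done
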
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:35.207 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:21:35.208 07:18:59 
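create_main_bridge then gives the initiator and target sides a common L2 segment. Note the `-m comment` tag on the iptables rule: teardown can later locate and delete exactly the rules the harness installed, without touching anything else. Condensed (assumes iptables is the active firewall backend on the host):

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # allow bridged traffic to hairpin through nvmf_br, tagged for later cleanup
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
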
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # 
set_up target0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:35.208 10.0.0.1 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip 
addr add 10.0.0.2/24 dev target0' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:35.208 10.0.0.2 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:35.208 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip 
link set target0_br master nvmf_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:35.209 07:18:59 
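Pair 1 is now being wired exactly the way pair 0 just was. Stripped of the set_up/eval indirection, one initiator/target pair reduces to the following (a sketch assuming nvmf_br and nvmf_ns_spdk already exist):

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                    # only the target end is namespaced
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br                  # bridge the *_br peers together
    ip link set target0_br    master nvmf_br
    ip link set initiator0_br up
    ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
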
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target1 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:35.209 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772163 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:35.468 07:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:35.468 10.0.0.3 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772164 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:35.468 10.0.0.4 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:35.468 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
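setup_interface_pair receives its addresses as packed 32-bit integers drawn from ip_pool (0x0a000001 and up); val_to_ip unpacks them back into dotted quads with the printf visible in the trace. Equivalent shell arithmetic:

    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
    }
    val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
    val_to_ip 167772164   # 10.0.0.4
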
nvmf/setup.sh@38 -- # ping_ips 2 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:35.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:21:35.469 00:21:35.469 --- 10.0.0.1 ping statistics --- 00:21:35.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.469 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:35.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:35.469 00:21:35.469 --- 10.0.0.2 ping statistics --- 00:21:35.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.469 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:35.469 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:35.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:35.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:35.470 00:21:35.470 --- 10.0.0.3 ping statistics --- 00:21:35.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.470 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:35.470 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:35.470 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:21:35.470 00:21:35.470 --- 10.0.0.4 ping statistics --- 00:21:35.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.470 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # return 0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:35.470 07:18:59 
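With connectivity verified, nvmf_legacy_env maps the new-style device names back onto the variables older tests expect (NVMF_FIRST_INITIATOR_IP, NVMF_FIRST_TARGET_IP, and friends). Each lookup just reads back the ifalias that set_ip wrote through tee earlier; a standalone equivalent:

    get_ip() {   # get_ip <dev> [netns]
      local dev=$1 ns=$2
      if [ -n "$ns" ]; then
        ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
      else
        cat "/sys/class/net/$dev/ifalias"
      fi
    }
    NVMF_FIRST_INITIATOR_IP=$(get_ip initiator0)
    NVMF_FIRST_TARGET_IP=$(get_ip target0 nvmf_ns_spdk)
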
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.470 07:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:35.470 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:21:35.471 ' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=66142 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 66142 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66142 ']' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.471 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66169 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=aae410af572c11b0136e2278bfdaf6aab58cf8999fb616fe 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 
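The records above show the test minting its DHCHAP secrets: gen_dhchap_key reads len/2 bytes from /dev/urandom as a hex string (xxd -p), drops it into a mktemp file named after the digest, and format_dhchap_key wraps it in the DHHC-1 secret representation before the file is locked down with chmod 0600. A minimal stand-alone sketch of that flow, for reference; the function name is ours, and the little-endian CRC-32 trailer is an assumption based on the DH-HMAC-CHAP secret format rather than anything visible in the trace:

gen_dhchap_key_sketch() { # usage: gen_dhchap_key_sketch <digest> <len>
    local digest=$1 len=$2 key file
    # <len> hex characters == <len>/2 random bytes
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<hash id>:<base64(secret || crc)>:  (00=null 01=sha256 02=sha384 03=sha512)
    KEY="$key" DIGEST="$digest" python3 > "$file" <<'PYEOF'
import base64, os, zlib
key = os.environ["KEY"].encode()
dgst = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}[os.environ["DIGEST"]]
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(dgst, base64.b64encode(key + crc).decode()), end="")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

The gen_dhchap_key runs that follow repeat this for each keys[i]/ckeys[i] pair used by the auth tests.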
00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Mgo 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key aae410af572c11b0136e2278bfdaf6aab58cf8999fb616fe 0 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 aae410af572c11b0136e2278bfdaf6aab58cf8999fb616fe 0 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=aae410af572c11b0136e2278bfdaf6aab58cf8999fb616fe 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Mgo 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Mgo 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Mgo 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=4f2c8e9e5cbefa4b95fabefadd3fae394121b373119582f70f52b7ff299fd7bc 00:21:36.405 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.n0n 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 4f2c8e9e5cbefa4b95fabefadd3fae394121b373119582f70f52b7ff299fd7bc 3 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 4f2c8e9e5cbefa4b95fabefadd3fae394121b373119582f70f52b7ff299fd7bc 3 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=4f2c8e9e5cbefa4b95fabefadd3fae394121b373119582f70f52b7ff299fd7bc 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@507 -- # python - 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.n0n 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.n0n 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.n0n 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=5a2c3efe0e655407296297413bf98a9d 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.tDh 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 5a2c3efe0e655407296297413bf98a9d 1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 5a2c3efe0e655407296297413bf98a9d 1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=5a2c3efe0e655407296297413bf98a9d 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.tDh 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.tDh 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tDh 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 
-- # len=48 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=7f34634db553bef8f647a67e13531de2a7f041992babbb32 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.o7s 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 7f34634db553bef8f647a67e13531de2a7f041992babbb32 2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 7f34634db553bef8f647a67e13531de2a7f041992babbb32 2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=7f34634db553bef8f647a67e13531de2a7f041992babbb32 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.o7s 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.o7s 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.o7s 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=789290194840572ed331f5d768c63b305ed07b2aba8dc93f 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.mfN 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 789290194840572ed331f5d768c63b305ed07b2aba8dc93f 2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 789290194840572ed331f5d768c63b305ed07b2aba8dc93f 2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=789290194840572ed331f5d768c63b305ed07b2aba8dc93f 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.mfN 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.mfN 00:21:36.663 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.mfN 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a6fc29c1165c8e5c146f7def2b9f6521 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.ntJ 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a6fc29c1165c8e5c146f7def2b9f6521 1 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a6fc29c1165c8e5c146f7def2b9f6521 1 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a6fc29c1165c8e5c146f7def2b9f6521 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.ntJ 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.ntJ 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ntJ 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=c31f4da8abbb270cf0de7991e393df6f85befda9bb0d08c7f946f86438198206 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.tQC 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key c31f4da8abbb270cf0de7991e393df6f85befda9bb0d08c7f946f86438198206 3 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 c31f4da8abbb270cf0de7991e393df6f85befda9bb0d08c7f946f86438198206 3 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=c31f4da8abbb270cf0de7991e393df6f85befda9bb0d08c7f946f86438198206 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:21:36.664 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.tQC 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.tQC 00:21:36.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tQC 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66142 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66142 ']' 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.921 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
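These /tmp/spdk.key-* files are what later surface as the DHHC-1:...: strings in the nvme connect commands: the middle field is the hash identifier (00 = none, 01 = sha256, 02 = sha384, 03 = sha512) and the payload is base64 of the secret bytes plus a four-byte integrity trailer. A small decode helper for eyeballing such secrets; the name is ours, and it assumes the trailer is a little-endian CRC-32 of the secret, which the log itself never spells out:

check_dhchap_secret() { # usage: check_dhchap_secret 'DHHC-1:<id>:<base64>:'
    SECRET="$1" python3 <<'PYEOF'
import base64, os, zlib
prefix, dgst, b64, _ = os.environ["SECRET"].split(":")
raw = base64.b64decode(b64)
key, crc = raw[:-4], raw[-4:]
assert prefix == "DHHC-1", "unexpected prefix"
assert crc == zlib.crc32(key).to_bytes(4, "little"), "CRC mismatch"
print("hash id %s, %d-byte secret: %s" % (dgst, len(key), key.decode()))
PYEOF
}

Fed the key0 secret that appears further down (DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==:), it should print back the 48-character hex string generated above.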
00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66169 /var/tmp/host.sock 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66169 ']' 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.921 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mgo 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.178 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.179 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mgo 00:21:37.179 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Mgo 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.n0n ]] 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n0n 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc 
keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n0n 00:21:37.436 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n0n 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tDh 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tDh 00:21:37.695 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tDh 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.o7s ]] 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o7s 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o7s 00:21:37.958 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o7s 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mfN 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mfN 00:21:37.958 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mfN 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ntJ ]] 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ntJ 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ntJ 00:21:38.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ntJ 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tQC 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tQC 00:21:38.480 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tQC 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.738 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.997 00:21:38.997 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.997 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.997 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.255 { 00:21:39.255 "cntlid": 1, 00:21:39.255 "qid": 0, 00:21:39.255 "state": "enabled", 00:21:39.255 "thread": "nvmf_tgt_poll_group_000", 00:21:39.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:39.255 "listen_address": { 00:21:39.255 "trtype": "TCP", 00:21:39.255 "adrfam": "IPv4", 00:21:39.255 "traddr": "10.0.0.2", 00:21:39.255 "trsvcid": "4420" 00:21:39.255 }, 00:21:39.255 "peer_address": { 00:21:39.255 "trtype": "TCP", 00:21:39.255 "adrfam": "IPv4", 00:21:39.255 "traddr": "10.0.0.1", 00:21:39.255 "trsvcid": "39204" 00:21:39.255 }, 00:21:39.255 "auth": { 00:21:39.255 "state": "completed", 00:21:39.255 "digest": "sha256", 00:21:39.255 "dhgroup": "null" 00:21:39.255 } 00:21:39.255 } 00:21:39.255 ]' 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.255 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.512 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:39.512 07:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.512 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.512 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.512 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.770 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:39.770 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:43.068 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.068 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.327 00:21:43.327 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.327 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.327 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.585 { 00:21:43.585 "cntlid": 3, 00:21:43.585 "qid": 0, 00:21:43.585 "state": "enabled", 00:21:43.585 "thread": "nvmf_tgt_poll_group_000", 00:21:43.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:43.585 "listen_address": { 00:21:43.585 "trtype": "TCP", 00:21:43.585 "adrfam": "IPv4", 00:21:43.585 "traddr": "10.0.0.2", 00:21:43.585 "trsvcid": "4420" 00:21:43.585 }, 00:21:43.585 "peer_address": { 00:21:43.585 "trtype": "TCP", 00:21:43.585 "adrfam": "IPv4", 00:21:43.585 "traddr": "10.0.0.1", 00:21:43.585 "trsvcid": "39224" 00:21:43.585 }, 00:21:43.585 "auth": { 00:21:43.585 "state": "completed", 00:21:43.585 "digest": "sha256", 00:21:43.585 "dhgroup": "null" 00:21:43.585 } 00:21:43.585 } 00:21:43.585 ]' 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.585 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.844 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:43.844 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:44.410 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.669 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.928 00:21:44.928 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.928 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.928 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.195 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.195 { 00:21:45.195 "cntlid": 5, 00:21:45.195 "qid": 0, 00:21:45.195 "state": "enabled", 00:21:45.195 "thread": "nvmf_tgt_poll_group_000", 00:21:45.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:45.195 "listen_address": { 00:21:45.195 "trtype": "TCP", 00:21:45.195 "adrfam": "IPv4", 00:21:45.195 "traddr": "10.0.0.2", 00:21:45.195 "trsvcid": "4420" 00:21:45.195 }, 00:21:45.195 "peer_address": { 00:21:45.195 "trtype": "TCP", 00:21:45.195 "adrfam": "IPv4", 00:21:45.195 "traddr": "10.0.0.1", 00:21:45.195 "trsvcid": "39236" 00:21:45.195 }, 00:21:45.195 "auth": { 00:21:45.195 "state": "completed", 00:21:45.195 "digest": "sha256", 00:21:45.195 "dhgroup": "null" 00:21:45.195 } 00:21:45.196 } 00:21:45.196 ]' 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.196 07:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.196 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.460 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:45.460 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:46.026 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.284 
07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.284 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.542 00:21:46.542 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.542 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.542 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.801 { 00:21:46.801 "cntlid": 7, 00:21:46.801 "qid": 0, 00:21:46.801 "state": "enabled", 00:21:46.801 "thread": "nvmf_tgt_poll_group_000", 00:21:46.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:46.801 "listen_address": { 00:21:46.801 "trtype": "TCP", 00:21:46.801 "adrfam": "IPv4", 00:21:46.801 "traddr": "10.0.0.2", 00:21:46.801 "trsvcid": "4420" 00:21:46.801 }, 00:21:46.801 "peer_address": { 00:21:46.801 "trtype": "TCP", 00:21:46.801 "adrfam": "IPv4", 00:21:46.801 "traddr": "10.0.0.1", 00:21:46.801 "trsvcid": "39258" 00:21:46.801 }, 00:21:46.801 "auth": { 00:21:46.801 "state": "completed", 00:21:46.801 "digest": "sha256", 00:21:46.801 "dhgroup": "null" 00:21:46.801 } 00:21:46.801 } 00:21:46.801 ]' 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.801 07:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.801 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.060 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:21:47.060 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.626 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.884 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.143 00:21:48.143 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.143 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.143 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.402 { 00:21:48.402 "cntlid": 9, 00:21:48.402 "qid": 0, 00:21:48.402 "state": "enabled", 00:21:48.402 "thread": "nvmf_tgt_poll_group_000", 00:21:48.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:48.402 "listen_address": { 00:21:48.402 "trtype": "TCP", 00:21:48.402 "adrfam": "IPv4", 00:21:48.402 "traddr": "10.0.0.2", 00:21:48.402 "trsvcid": "4420" 00:21:48.402 }, 00:21:48.402 "peer_address": { 00:21:48.402 "trtype": "TCP", 00:21:48.402 "adrfam": "IPv4", 00:21:48.402 "traddr": "10.0.0.1", 00:21:48.402 "trsvcid": "51658" 00:21:48.402 }, 00:21:48.402 "auth": { 00:21:48.402 "state": "completed", 00:21:48.402 "digest": "sha256", 00:21:48.402 "dhgroup": "ffdhe2048" 00:21:48.402 } 00:21:48.402 } 00:21:48.402 ]' 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.402 07:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.402 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.659 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:48.659 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:49.225 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.483 
07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.483 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.742 00:21:49.742 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.742 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.742 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.000 { 00:21:50.000 "cntlid": 11, 00:21:50.000 "qid": 0, 00:21:50.000 "state": "enabled", 00:21:50.000 "thread": "nvmf_tgt_poll_group_000", 00:21:50.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:50.000 "listen_address": { 00:21:50.000 "trtype": "TCP", 00:21:50.000 "adrfam": "IPv4", 00:21:50.000 "traddr": "10.0.0.2", 00:21:50.000 "trsvcid": "4420" 00:21:50.000 }, 00:21:50.000 "peer_address": { 00:21:50.000 "trtype": "TCP", 00:21:50.000 "adrfam": "IPv4", 00:21:50.000 "traddr": "10.0.0.1", 00:21:50.000 "trsvcid": "51666" 00:21:50.000 }, 00:21:50.000 "auth": { 00:21:50.000 "state": "completed", 00:21:50.000 "digest": "sha256", 00:21:50.000 "dhgroup": 
"ffdhe2048" 00:21:50.000 } 00:21:50.000 } 00:21:50.000 ]' 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.000 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.258 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.258 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.258 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.258 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:50.258 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:50.825 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.083 
07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.083 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.341 00:21:51.341 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.341 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.341 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.599 { 00:21:51.599 "cntlid": 13, 00:21:51.599 "qid": 0, 00:21:51.599 "state": "enabled", 00:21:51.599 "thread": "nvmf_tgt_poll_group_000", 00:21:51.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:51.599 "listen_address": { 00:21:51.599 "trtype": "TCP", 00:21:51.599 "adrfam": "IPv4", 00:21:51.599 "traddr": "10.0.0.2", 00:21:51.599 "trsvcid": "4420" 00:21:51.599 }, 00:21:51.599 "peer_address": { 00:21:51.599 "trtype": "TCP", 00:21:51.599 "adrfam": "IPv4", 00:21:51.599 "traddr": "10.0.0.1", 00:21:51.599 "trsvcid": "51682" 00:21:51.599 }, 
00:21:51.599 "auth": { 00:21:51.599 "state": "completed", 00:21:51.599 "digest": "sha256", 00:21:51.599 "dhgroup": "ffdhe2048" 00:21:51.599 } 00:21:51.599 } 00:21:51.599 ]' 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.599 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.940 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.940 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.940 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.940 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.940 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.940 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:51.940 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:52.526 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:52.527 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.785 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.043 00:21:53.043 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.043 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.043 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.301 { 00:21:53.301 "cntlid": 15, 00:21:53.301 "qid": 0, 00:21:53.301 "state": "enabled", 00:21:53.301 "thread": "nvmf_tgt_poll_group_000", 00:21:53.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:53.301 "listen_address": { 00:21:53.301 "trtype": "TCP", 00:21:53.301 "adrfam": "IPv4", 00:21:53.301 "traddr": "10.0.0.2", 00:21:53.301 "trsvcid": "4420" 00:21:53.301 }, 00:21:53.301 "peer_address": { 00:21:53.301 "trtype": "TCP", 00:21:53.301 "adrfam": "IPv4", 00:21:53.301 "traddr": "10.0.0.1", 00:21:53.301 "trsvcid": "51716" 
00:21:53.301 }, 00:21:53.301 "auth": { 00:21:53.301 "state": "completed", 00:21:53.301 "digest": "sha256", 00:21:53.301 "dhgroup": "ffdhe2048" 00:21:53.301 } 00:21:53.301 } 00:21:53.301 ]' 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.301 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.560 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:21:53.560 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:54.127 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.386 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.386 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.643 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.643 { 00:21:54.643 "cntlid": 17, 00:21:54.643 "qid": 0, 00:21:54.643 "state": "enabled", 00:21:54.643 "thread": "nvmf_tgt_poll_group_000", 00:21:54.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:54.643 "listen_address": { 00:21:54.643 "trtype": "TCP", 00:21:54.643 "adrfam": "IPv4", 00:21:54.643 "traddr": "10.0.0.2", 00:21:54.643 "trsvcid": "4420" 00:21:54.643 }, 00:21:54.643 "peer_address": { 00:21:54.643 
"trtype": "TCP", 00:21:54.643 "adrfam": "IPv4", 00:21:54.643 "traddr": "10.0.0.1", 00:21:54.643 "trsvcid": "51746" 00:21:54.643 }, 00:21:54.643 "auth": { 00:21:54.644 "state": "completed", 00:21:54.644 "digest": "sha256", 00:21:54.644 "dhgroup": "ffdhe3072" 00:21:54.644 } 00:21:54.644 } 00:21:54.644 ]' 00:21:54.644 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.900 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.160 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:55.160 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:21:55.724 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.724 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:55.724 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.725 07:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.725 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.289 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.289 { 00:21:56.289 "cntlid": 19, 00:21:56.289 "qid": 0, 00:21:56.289 "state": "enabled", 00:21:56.289 "thread": "nvmf_tgt_poll_group_000", 00:21:56.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 
00:21:56.289 "listen_address": { 00:21:56.289 "trtype": "TCP", 00:21:56.289 "adrfam": "IPv4", 00:21:56.289 "traddr": "10.0.0.2", 00:21:56.289 "trsvcid": "4420" 00:21:56.289 }, 00:21:56.289 "peer_address": { 00:21:56.289 "trtype": "TCP", 00:21:56.289 "adrfam": "IPv4", 00:21:56.289 "traddr": "10.0.0.1", 00:21:56.289 "trsvcid": "51786" 00:21:56.289 }, 00:21:56.289 "auth": { 00:21:56.289 "state": "completed", 00:21:56.289 "digest": "sha256", 00:21:56.289 "dhgroup": "ffdhe3072" 00:21:56.289 } 00:21:56.289 } 00:21:56.289 ]' 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.289 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.546 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.546 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.546 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.546 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.547 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.803 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:56.803 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.368 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.369 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.627 00:21:57.627 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.627 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.627 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.884 { 00:21:57.884 "cntlid": 21, 00:21:57.884 "qid": 0, 00:21:57.884 "state": "enabled", 00:21:57.884 "thread": 
"nvmf_tgt_poll_group_000", 00:21:57.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:57.884 "listen_address": { 00:21:57.884 "trtype": "TCP", 00:21:57.884 "adrfam": "IPv4", 00:21:57.884 "traddr": "10.0.0.2", 00:21:57.884 "trsvcid": "4420" 00:21:57.884 }, 00:21:57.884 "peer_address": { 00:21:57.884 "trtype": "TCP", 00:21:57.884 "adrfam": "IPv4", 00:21:57.884 "traddr": "10.0.0.1", 00:21:57.884 "trsvcid": "34900" 00:21:57.884 }, 00:21:57.884 "auth": { 00:21:57.884 "state": "completed", 00:21:57.884 "digest": "sha256", 00:21:57.884 "dhgroup": "ffdhe3072" 00:21:57.884 } 00:21:57.884 } 00:21:57.884 ]' 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.884 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.141 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.141 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.141 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.141 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.141 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.142 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:58.142 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:21:58.717 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.717 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:21:58.717 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.717 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.975 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.975 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.975 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.975 07:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.975 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.233 00:21:59.233 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.233 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.233 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.492 { 00:21:59.492 "cntlid": 23, 00:21:59.492 "qid": 0, 00:21:59.492 "state": "enabled", 00:21:59.492 
"thread": "nvmf_tgt_poll_group_000", 00:21:59.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:21:59.492 "listen_address": { 00:21:59.492 "trtype": "TCP", 00:21:59.492 "adrfam": "IPv4", 00:21:59.492 "traddr": "10.0.0.2", 00:21:59.492 "trsvcid": "4420" 00:21:59.492 }, 00:21:59.492 "peer_address": { 00:21:59.492 "trtype": "TCP", 00:21:59.492 "adrfam": "IPv4", 00:21:59.492 "traddr": "10.0.0.1", 00:21:59.492 "trsvcid": "34922" 00:21:59.492 }, 00:21:59.492 "auth": { 00:21:59.492 "state": "completed", 00:21:59.492 "digest": "sha256", 00:21:59.492 "dhgroup": "ffdhe3072" 00:21:59.492 } 00:21:59.492 } 00:21:59.492 ]' 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.492 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:21:59.750 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:00.316 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.316 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:00.316 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.316 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.575 
07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.575 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.833 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:22:01.091 { 00:22:01.091 "cntlid": 25, 00:22:01.091 "qid": 0, 00:22:01.091 "state": "enabled", 00:22:01.091 "thread": "nvmf_tgt_poll_group_000", 00:22:01.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:01.091 "listen_address": { 00:22:01.091 "trtype": "TCP", 00:22:01.091 "adrfam": "IPv4", 00:22:01.091 "traddr": "10.0.0.2", 00:22:01.091 "trsvcid": "4420" 00:22:01.091 }, 00:22:01.091 "peer_address": { 00:22:01.091 "trtype": "TCP", 00:22:01.091 "adrfam": "IPv4", 00:22:01.091 "traddr": "10.0.0.1", 00:22:01.091 "trsvcid": "34946" 00:22:01.091 }, 00:22:01.091 "auth": { 00:22:01.091 "state": "completed", 00:22:01.091 "digest": "sha256", 00:22:01.091 "dhgroup": "ffdhe4096" 00:22:01.091 } 00:22:01.091 } 00:22:01.091 ]' 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.091 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.349 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.606 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:01.607 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.172 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.737 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.737 07:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.737 { 00:22:02.737 "cntlid": 27, 00:22:02.737 "qid": 0, 00:22:02.737 "state": "enabled", 00:22:02.737 "thread": "nvmf_tgt_poll_group_000", 00:22:02.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:02.737 "listen_address": { 00:22:02.737 "trtype": "TCP", 00:22:02.737 "adrfam": "IPv4", 00:22:02.737 "traddr": "10.0.0.2", 00:22:02.737 "trsvcid": "4420" 00:22:02.737 }, 00:22:02.737 "peer_address": { 00:22:02.737 "trtype": "TCP", 00:22:02.737 "adrfam": "IPv4", 00:22:02.737 "traddr": "10.0.0.1", 00:22:02.737 "trsvcid": "34970" 00:22:02.737 }, 00:22:02.737 "auth": { 00:22:02.737 "state": "completed", 00:22:02.737 "digest": "sha256", 00:22:02.737 "dhgroup": "ffdhe4096" 00:22:02.737 } 00:22:02.737 } 00:22:02.737 ]' 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.737 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.994 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.994 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.994 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.994 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.994 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.251 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:03.251 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
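Each pass of the loop above runs the same provisioning sequence with the next key index. A minimal sketch of one iteration (digest sha256, dhgroup ffdhe4096, keyid 2), assuming the DH-CHAP keys were registered earlier in the run as key0..key3 with controller keys ckey0..ckey2, and that the target-side RPC goes to the target app's default socket while host-side RPCs use /var/tmp/host.sock as seen in the trace:

hostnqn="nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227"
# Host side: restrict the initiator to a single digest/dhgroup combination.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Target side: allow the host on the subsystem with the key pair under test.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach a controller that authenticates with the same keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2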
00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.815 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.815 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.815 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.381 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.381 07:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.381 { 00:22:04.381 "cntlid": 29, 00:22:04.381 "qid": 0, 00:22:04.381 "state": "enabled", 00:22:04.381 "thread": "nvmf_tgt_poll_group_000", 00:22:04.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:04.381 "listen_address": { 00:22:04.381 "trtype": "TCP", 00:22:04.381 "adrfam": "IPv4", 00:22:04.381 "traddr": "10.0.0.2", 00:22:04.381 "trsvcid": "4420" 00:22:04.381 }, 00:22:04.381 "peer_address": { 00:22:04.381 "trtype": "TCP", 00:22:04.381 "adrfam": "IPv4", 00:22:04.381 "traddr": "10.0.0.1", 00:22:04.381 "trsvcid": "34986" 00:22:04.381 }, 00:22:04.381 "auth": { 00:22:04.381 "state": "completed", 00:22:04.381 "digest": "sha256", 00:22:04.381 "dhgroup": "ffdhe4096" 00:22:04.381 } 00:22:04.381 } 00:22:04.381 ]' 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.381 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.640 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.640 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.640 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.640 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:04.640 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
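The qpair dump printed above is what the assertions key on: once the handshake succeeds, nvmf_subsystem_get_qpairs reports the negotiated digest and DH group and an auth state of "completed" for the queue pair. The same checks, condensed into a sketch that assumes the subsystem NQN used throughout this run and the target's default RPC socket:

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# Fail the iteration if any negotiated auth parameter differs from the request.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]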
00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:05.230 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.487 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.745 00:22:05.745 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.745 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.745 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
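Besides the SPDK bdev initiator, each key is also exercised through the Linux kernel initiator via nvme-cli. Secrets travel in the standard DHHC-1 representation, "DHHC-1:NN:<base64>:", where NN names the hash used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); supplying --dhchap-ctrl-secret as well requests bidirectional authentication. A sketch reusing this run's key1/ckey1 test secrets and the connect flags seen above:

uuid=6878406f-1821-4d15-bee4-f9cf994eb227
# Connect with host and controller secrets, then tear the session down again.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: \
    --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0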
00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.002 { 00:22:06.002 "cntlid": 31, 00:22:06.002 "qid": 0, 00:22:06.002 "state": "enabled", 00:22:06.002 "thread": "nvmf_tgt_poll_group_000", 00:22:06.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:06.002 "listen_address": { 00:22:06.002 "trtype": "TCP", 00:22:06.002 "adrfam": "IPv4", 00:22:06.002 "traddr": "10.0.0.2", 00:22:06.002 "trsvcid": "4420" 00:22:06.002 }, 00:22:06.002 "peer_address": { 00:22:06.002 "trtype": "TCP", 00:22:06.002 "adrfam": "IPv4", 00:22:06.002 "traddr": "10.0.0.1", 00:22:06.002 "trsvcid": "35018" 00:22:06.002 }, 00:22:06.002 "auth": { 00:22:06.002 "state": "completed", 00:22:06.002 "digest": "sha256", 00:22:06.002 "dhgroup": "ffdhe4096" 00:22:06.002 } 00:22:06.002 } 00:22:06.002 ]' 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.002 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:06.261 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:06.827 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.084 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.650 00:22:07.650 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.650 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.650 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.911 
07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.911 { 00:22:07.911 "cntlid": 33, 00:22:07.911 "qid": 0, 00:22:07.911 "state": "enabled", 00:22:07.911 "thread": "nvmf_tgt_poll_group_000", 00:22:07.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:07.911 "listen_address": { 00:22:07.911 "trtype": "TCP", 00:22:07.911 "adrfam": "IPv4", 00:22:07.911 "traddr": "10.0.0.2", 00:22:07.911 "trsvcid": "4420" 00:22:07.911 }, 00:22:07.911 "peer_address": { 00:22:07.911 "trtype": "TCP", 00:22:07.911 "adrfam": "IPv4", 00:22:07.911 "traddr": "10.0.0.1", 00:22:07.911 "trsvcid": "37454" 00:22:07.911 }, 00:22:07.911 "auth": { 00:22:07.911 "state": "completed", 00:22:07.911 "digest": "sha256", 00:22:07.911 "dhgroup": "ffdhe6144" 00:22:07.911 } 00:22:07.911 } 00:22:07.911 ]' 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.911 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.911 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.911 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.911 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.169 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:08.169 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:08.783 07:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:08.783 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.040 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.040 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.040 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.040 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.040 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.296 00:22:09.296 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.296 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.296 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.553 { 00:22:09.553 "cntlid": 35, 00:22:09.553 "qid": 0, 00:22:09.553 "state": "enabled", 00:22:09.553 "thread": "nvmf_tgt_poll_group_000", 00:22:09.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:09.553 "listen_address": { 00:22:09.553 "trtype": "TCP", 00:22:09.553 "adrfam": "IPv4", 00:22:09.553 "traddr": "10.0.0.2", 00:22:09.553 "trsvcid": "4420" 00:22:09.553 }, 00:22:09.553 "peer_address": { 00:22:09.553 "trtype": "TCP", 00:22:09.553 "adrfam": "IPv4", 00:22:09.553 "traddr": "10.0.0.1", 00:22:09.553 "trsvcid": "37472" 00:22:09.553 }, 00:22:09.553 "auth": { 00:22:09.553 "state": "completed", 00:22:09.553 "digest": "sha256", 00:22:09.553 "dhgroup": "ffdhe6144" 00:22:09.553 } 00:22:09.553 } 00:22:09.553 ]' 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.553 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.809 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:09.809 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:10.438 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:10.439 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:10.719 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:10.719 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.719 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.719 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.720 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.977 00:22:10.977 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.977 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.977 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.234 { 00:22:11.234 "cntlid": 37, 00:22:11.234 "qid": 0, 00:22:11.234 "state": "enabled", 00:22:11.234 "thread": "nvmf_tgt_poll_group_000", 00:22:11.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:11.234 "listen_address": { 00:22:11.234 "trtype": "TCP", 00:22:11.234 "adrfam": "IPv4", 00:22:11.234 "traddr": "10.0.0.2", 00:22:11.234 "trsvcid": "4420" 00:22:11.234 }, 00:22:11.234 "peer_address": { 00:22:11.234 "trtype": "TCP", 00:22:11.234 "adrfam": "IPv4", 00:22:11.234 "traddr": "10.0.0.1", 00:22:11.234 "trsvcid": "37494" 00:22:11.234 }, 00:22:11.234 "auth": { 00:22:11.234 "state": "completed", 00:22:11.234 "digest": "sha256", 00:22:11.234 "dhgroup": "ffdhe6144" 00:22:11.234 } 00:22:11.234 } 00:22:11.234 ]' 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.234 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.492 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:11.492 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.057 07:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.057 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.315 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.881 00:22:12.881 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.881 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.881 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.881 { 00:22:12.881 "cntlid": 39, 00:22:12.881 "qid": 0, 00:22:12.881 "state": "enabled", 00:22:12.881 "thread": "nvmf_tgt_poll_group_000", 00:22:12.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:12.881 "listen_address": { 00:22:12.881 "trtype": "TCP", 00:22:12.881 "adrfam": "IPv4", 00:22:12.881 "traddr": "10.0.0.2", 00:22:12.881 "trsvcid": "4420" 00:22:12.881 }, 00:22:12.881 "peer_address": { 00:22:12.881 "trtype": "TCP", 00:22:12.881 "adrfam": "IPv4", 00:22:12.881 "traddr": "10.0.0.1", 00:22:12.881 "trsvcid": "37514" 00:22:12.881 }, 00:22:12.881 "auth": { 00:22:12.881 "state": "completed", 00:22:12.881 "digest": "sha256", 00:22:12.881 "dhgroup": "ffdhe6144" 00:22:12.881 } 00:22:12.881 } 00:22:12.881 ]' 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.881 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:13.139 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.073 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.073 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.638 00:22:14.638 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.638 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.638 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.895 { 00:22:14.895 "cntlid": 41, 00:22:14.895 "qid": 0, 00:22:14.895 "state": "enabled", 00:22:14.895 "thread": "nvmf_tgt_poll_group_000", 00:22:14.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:14.895 "listen_address": { 00:22:14.895 "trtype": "TCP", 00:22:14.895 "adrfam": "IPv4", 00:22:14.895 "traddr": "10.0.0.2", 00:22:14.895 "trsvcid": "4420" 00:22:14.895 }, 00:22:14.895 "peer_address": { 00:22:14.895 "trtype": "TCP", 00:22:14.895 "adrfam": "IPv4", 00:22:14.895 "traddr": "10.0.0.1", 00:22:14.895 "trsvcid": "37534" 00:22:14.895 }, 00:22:14.895 "auth": { 00:22:14.895 "state": "completed", 00:22:14.895 "digest": "sha256", 00:22:14.895 "dhgroup": "ffdhe8192" 00:22:14.895 } 00:22:14.895 } 00:22:14.895 ]' 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.895 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.895 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.895 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.895 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.156 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:15.156 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:15.725 
07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.725 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.726 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.988 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:15.988 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.988 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.988 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.988 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.989 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.989 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.989 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.556 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.556 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.813 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.813 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.813 { 00:22:16.813 "cntlid": 43, 00:22:16.813 "qid": 0, 00:22:16.813 "state": "enabled", 00:22:16.813 "thread": "nvmf_tgt_poll_group_000", 00:22:16.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:16.813 "listen_address": { 00:22:16.813 "trtype": "TCP", 00:22:16.813 "adrfam": "IPv4", 00:22:16.813 "traddr": "10.0.0.2", 00:22:16.813 "trsvcid": "4420" 00:22:16.813 }, 00:22:16.813 "peer_address": { 00:22:16.813 "trtype": "TCP", 00:22:16.813 "adrfam": "IPv4", 00:22:16.813 "traddr": "10.0.0.1", 00:22:16.813 "trsvcid": "37564" 00:22:16.813 }, 00:22:16.813 "auth": { 00:22:16.813 "state": "completed", 00:22:16.813 "digest": "sha256", 00:22:16.813 "dhgroup": "ffdhe8192" 00:22:16.813 } 00:22:16.813 } 00:22:16.813 ]' 00:22:16.813 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.813 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.814 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.071 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:17.071 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: 
--dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.637 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.894 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.460 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.460 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.460 { 00:22:18.460 "cntlid": 45, 00:22:18.460 "qid": 0, 00:22:18.460 "state": "enabled", 00:22:18.460 "thread": "nvmf_tgt_poll_group_000", 00:22:18.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:18.460 "listen_address": { 00:22:18.460 "trtype": "TCP", 00:22:18.460 "adrfam": "IPv4", 00:22:18.460 "traddr": "10.0.0.2", 00:22:18.460 "trsvcid": "4420" 00:22:18.460 }, 00:22:18.460 "peer_address": { 00:22:18.460 "trtype": "TCP", 00:22:18.460 "adrfam": "IPv4", 00:22:18.460 "traddr": "10.0.0.1", 00:22:18.460 "trsvcid": "32830" 00:22:18.460 }, 00:22:18.460 "auth": { 00:22:18.460 "state": "completed", 00:22:18.460 "digest": "sha256", 00:22:18.460 "dhgroup": "ffdhe8192" 00:22:18.460 } 00:22:18.460 } 00:22:18.461 ]' 00:22:18.461 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.461 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.461 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.976 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:18.976 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 
6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.544 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.110 00:22:20.110 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.110 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.110 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.369 { 00:22:20.369 "cntlid": 47, 00:22:20.369 "qid": 0, 00:22:20.369 "state": "enabled", 00:22:20.369 "thread": "nvmf_tgt_poll_group_000", 00:22:20.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:20.369 "listen_address": { 00:22:20.369 "trtype": "TCP", 00:22:20.369 "adrfam": "IPv4", 00:22:20.369 "traddr": "10.0.0.2", 00:22:20.369 "trsvcid": "4420" 00:22:20.369 }, 00:22:20.369 "peer_address": { 00:22:20.369 "trtype": "TCP", 00:22:20.369 "adrfam": "IPv4", 00:22:20.369 "traddr": "10.0.0.1", 00:22:20.369 "trsvcid": "32836" 00:22:20.369 }, 00:22:20.369 "auth": { 00:22:20.369 "state": "completed", 00:22:20.369 "digest": "sha256", 00:22:20.369 "dhgroup": "ffdhe8192" 00:22:20.369 } 00:22:20.369 } 00:22:20.369 ]' 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.369 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.631 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:20.631 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret 
DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:21.196 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.454 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.455 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.719 00:22:21.719 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.719 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.719 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.980 { 00:22:21.980 "cntlid": 49, 00:22:21.980 "qid": 0, 00:22:21.980 "state": "enabled", 00:22:21.980 "thread": "nvmf_tgt_poll_group_000", 00:22:21.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:21.980 "listen_address": { 00:22:21.980 "trtype": "TCP", 00:22:21.980 "adrfam": "IPv4", 00:22:21.980 "traddr": "10.0.0.2", 00:22:21.980 "trsvcid": "4420" 00:22:21.980 }, 00:22:21.980 "peer_address": { 00:22:21.980 "trtype": "TCP", 00:22:21.980 "adrfam": "IPv4", 00:22:21.980 "traddr": "10.0.0.1", 00:22:21.980 "trsvcid": "32852" 00:22:21.980 }, 00:22:21.980 "auth": { 00:22:21.980 "state": "completed", 00:22:21.980 "digest": "sha384", 00:22:21.980 "dhgroup": "null" 00:22:21.980 } 00:22:21.980 } 00:22:21.980 ]' 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.980 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.354 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret 
DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:22.354 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:22.920 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.178 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.436 00:22:23.436 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.436 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.436 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.693 { 00:22:23.693 "cntlid": 51, 00:22:23.693 "qid": 0, 00:22:23.693 "state": "enabled", 00:22:23.693 "thread": "nvmf_tgt_poll_group_000", 00:22:23.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:23.693 "listen_address": { 00:22:23.693 "trtype": "TCP", 00:22:23.693 "adrfam": "IPv4", 00:22:23.693 "traddr": "10.0.0.2", 00:22:23.693 "trsvcid": "4420" 00:22:23.693 }, 00:22:23.693 "peer_address": { 00:22:23.693 "trtype": "TCP", 00:22:23.693 "adrfam": "IPv4", 00:22:23.693 "traddr": "10.0.0.1", 00:22:23.693 "trsvcid": "32878" 00:22:23.693 }, 00:22:23.693 "auth": { 00:22:23.693 "state": "completed", 00:22:23.693 "digest": "sha384", 00:22:23.693 "dhgroup": "null" 00:22:23.693 } 00:22:23.693 } 00:22:23.693 ]' 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.693 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.959 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:23.959 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:24.524 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.782 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.783 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.783 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.783 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.040 00:22:25.040 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.040 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.040 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.297 { 00:22:25.297 "cntlid": 53, 00:22:25.297 "qid": 0, 00:22:25.297 "state": "enabled", 00:22:25.297 "thread": "nvmf_tgt_poll_group_000", 00:22:25.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:25.297 "listen_address": { 00:22:25.297 "trtype": "TCP", 00:22:25.297 "adrfam": "IPv4", 00:22:25.297 "traddr": "10.0.0.2", 00:22:25.297 "trsvcid": "4420" 00:22:25.297 }, 00:22:25.297 "peer_address": { 00:22:25.297 "trtype": "TCP", 00:22:25.297 "adrfam": "IPv4", 00:22:25.297 "traddr": "10.0.0.1", 00:22:25.297 "trsvcid": "32910" 00:22:25.297 }, 00:22:25.297 "auth": { 00:22:25.297 "state": "completed", 00:22:25.297 "digest": "sha384", 00:22:25.297 "dhgroup": "null" 00:22:25.297 } 00:22:25.297 } 00:22:25.297 ]' 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.297 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
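[reading aid] The records above finish one connect_authenticate pass (sha384/null, key2); the trace below repeats the same cycle for key3 and then sweeps the ffdhe2048 group. As a condensed sketch assembled only from commands visible in this trace — where hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock (target/auth.sh@31) and rpc_cmd is the target-side wrapper from autotest_common.sh — each iteration amounts to roughly the following; $digest, $dhgroup, $keyid, $secret and $ctrl_secret are stand-ins for the literal values seen in the surrounding records:

    # One connect_authenticate iteration; digest/dhgroup/key vary per loop.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227

    # Pin the host-side bdev_nvme layer to a single digest/dhgroup pair.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Register the host on the subsystem with the DH-HMAC-CHAP key under test
    # (the --dhchap-ctrlr-key argument is dropped when ckeys[keyid] is empty,
    # per the ${ckeys[$3]:+...} expansion at target/auth.sh@68).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a host-side controller, which forces the authentication handshake.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify the controller exists and the target recorded the expected
    # negotiation on the resulting qpair.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down, re-run the handshake once through nvme-cli, then deregister.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note the asymmetry visible at target/auth.sh@70 and @80: ckeys[3] is empty, so the key3 passes register the host with --dhchap-key key3 only, and the matching nvme connect carries a single --dhchap-secret, i.e. no controller challenge in the reverse direction.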
00:22:25.555 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:25.555 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:26.120 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.120 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:26.120 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.120 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.120 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.121 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.121 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:26.121 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.378 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.636 00:22:26.636 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.636 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.636 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.893 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.893 { 00:22:26.893 "cntlid": 55, 00:22:26.893 "qid": 0, 00:22:26.893 "state": "enabled", 00:22:26.893 "thread": "nvmf_tgt_poll_group_000", 00:22:26.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:26.893 "listen_address": { 00:22:26.893 "trtype": "TCP", 00:22:26.893 "adrfam": "IPv4", 00:22:26.893 "traddr": "10.0.0.2", 00:22:26.893 "trsvcid": "4420" 00:22:26.893 }, 00:22:26.893 "peer_address": { 00:22:26.893 "trtype": "TCP", 00:22:26.893 "adrfam": "IPv4", 00:22:26.893 "traddr": "10.0.0.1", 00:22:26.893 "trsvcid": "32932" 00:22:26.893 }, 00:22:26.893 "auth": { 00:22:26.893 "state": "completed", 00:22:26.893 "digest": "sha384", 00:22:26.893 "dhgroup": "null" 00:22:26.893 } 00:22:26.893 } 00:22:26.893 ]' 00:22:26.893 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.893 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.893 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.893 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:26.893 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.151 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.151 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.151 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.151 07:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:27.151 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:28.084 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.084 07:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.084 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.342 00:22:28.342 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.342 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.342 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.601 { 00:22:28.601 "cntlid": 57, 00:22:28.601 "qid": 0, 00:22:28.601 "state": "enabled", 00:22:28.601 "thread": "nvmf_tgt_poll_group_000", 00:22:28.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:28.601 "listen_address": { 00:22:28.601 "trtype": "TCP", 00:22:28.601 "adrfam": "IPv4", 00:22:28.601 "traddr": "10.0.0.2", 00:22:28.601 "trsvcid": "4420" 00:22:28.601 }, 00:22:28.601 "peer_address": { 00:22:28.601 "trtype": "TCP", 00:22:28.601 "adrfam": "IPv4", 00:22:28.601 "traddr": "10.0.0.1", 00:22:28.601 "trsvcid": "46366" 00:22:28.601 }, 00:22:28.601 "auth": { 00:22:28.601 "state": "completed", 00:22:28.601 "digest": "sha384", 00:22:28.601 "dhgroup": "ffdhe2048" 00:22:28.601 } 00:22:28.601 } 00:22:28.601 ]' 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.601 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.860 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.860 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.860 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.860 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.860 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.118 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:29.118 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.721 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.979 00:22:29.979 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.979 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.979 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.237 { 00:22:30.237 "cntlid": 59, 00:22:30.237 "qid": 0, 00:22:30.237 "state": "enabled", 00:22:30.237 "thread": "nvmf_tgt_poll_group_000", 00:22:30.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:30.237 "listen_address": { 00:22:30.237 "trtype": "TCP", 00:22:30.237 "adrfam": "IPv4", 00:22:30.237 "traddr": "10.0.0.2", 00:22:30.237 "trsvcid": "4420" 00:22:30.237 }, 00:22:30.237 "peer_address": { 00:22:30.237 "trtype": "TCP", 00:22:30.237 "adrfam": "IPv4", 00:22:30.237 "traddr": "10.0.0.1", 00:22:30.237 "trsvcid": "46402" 00:22:30.237 }, 00:22:30.237 "auth": { 00:22:30.237 "state": "completed", 00:22:30.237 "digest": "sha384", 00:22:30.237 "dhgroup": "ffdhe2048" 00:22:30.237 } 00:22:30.237 } 00:22:30.237 ]' 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.237 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.496 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.496 07:19:54 
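# Each keyid pass in this stretch follows the same shape. A condensed sketch
# using the literal values printed in the surrounding lines (addresses, NQNs,
# and key names come from this log, not from a general recipe):

  # host side: restrict DH-HMAC-CHAP negotiation to a single digest/dhgroup
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # target side: authorize the host NQN with the key pair under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attaching the controller forces the authentication handshake
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1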
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.496 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.496 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:30.496 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:31.060 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.061 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.319 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.576 00:22:31.576 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.576 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.576 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.834 { 00:22:31.834 "cntlid": 61, 00:22:31.834 "qid": 0, 00:22:31.834 "state": "enabled", 00:22:31.834 "thread": "nvmf_tgt_poll_group_000", 00:22:31.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:31.834 "listen_address": { 00:22:31.834 "trtype": "TCP", 00:22:31.834 "adrfam": "IPv4", 00:22:31.834 "traddr": "10.0.0.2", 00:22:31.834 "trsvcid": "4420" 00:22:31.834 }, 00:22:31.834 "peer_address": { 00:22:31.834 "trtype": "TCP", 00:22:31.834 "adrfam": "IPv4", 00:22:31.834 "traddr": "10.0.0.1", 00:22:31.834 "trsvcid": "46428" 00:22:31.834 }, 00:22:31.834 "auth": { 00:22:31.834 "state": "completed", 00:22:31.834 "digest": "sha384", 00:22:31.834 "dhgroup": "ffdhe2048" 00:22:31.834 } 00:22:31.834 } 00:22:31.834 ]' 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:31.834 07:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.834 07:19:56 
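# The qpair dump above is how each pass is verified: the target's view of the
# freshly attached queue pair must report the negotiated auth parameters. A
# paraphrase of the checks behind the auth.sh@73-@77 markers ("qpairs" as a
# variable name is taken from the @74 assignment shown in the log):

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
  # note how cntlid in the dumps advances with every fresh attach: 57, 59, 61, ...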
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.834 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.834 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.092 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:32.092 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.658 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.916 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.175 00:22:33.175 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.175 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.175 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.433 { 00:22:33.433 "cntlid": 63, 00:22:33.433 "qid": 0, 00:22:33.433 "state": "enabled", 00:22:33.433 "thread": "nvmf_tgt_poll_group_000", 00:22:33.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:33.433 "listen_address": { 00:22:33.433 "trtype": "TCP", 00:22:33.433 "adrfam": "IPv4", 00:22:33.433 "traddr": "10.0.0.2", 00:22:33.433 "trsvcid": "4420" 00:22:33.433 }, 00:22:33.433 "peer_address": { 00:22:33.433 "trtype": "TCP", 00:22:33.433 "adrfam": "IPv4", 00:22:33.433 "traddr": "10.0.0.1", 00:22:33.433 "trsvcid": "46466" 00:22:33.433 }, 00:22:33.433 "auth": { 00:22:33.433 "state": "completed", 00:22:33.433 "digest": "sha384", 00:22:33.433 "dhgroup": "ffdhe2048" 00:22:33.433 } 00:22:33.433 } 00:22:33.433 ]' 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.433 
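# keyid 3 differs from the other passes: the nvmf_subsystem_add_host and
# bdev_nvme_attach_controller calls above carry only --dhchap-key key3, with no
# --dhchap-ctrlr-key. That follows from the expansion printed at auth.sh@68,

  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

# which produces an empty array when ckeys[3] is unset or empty, so this pass
# exercises unidirectional authentication (the host is challenged, the
# controller is not asked to prove itself back).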
07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.433 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.691 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:33.691 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.256 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.577 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.835 00:22:34.835 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.835 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.835 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.095 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.095 { 00:22:35.095 "cntlid": 65, 00:22:35.095 "qid": 0, 00:22:35.095 "state": "enabled", 00:22:35.095 "thread": "nvmf_tgt_poll_group_000", 00:22:35.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:35.095 "listen_address": { 00:22:35.095 "trtype": "TCP", 00:22:35.095 "adrfam": "IPv4", 00:22:35.095 "traddr": "10.0.0.2", 00:22:35.095 "trsvcid": "4420" 00:22:35.095 }, 00:22:35.095 "peer_address": { 00:22:35.095 "trtype": "TCP", 00:22:35.095 "adrfam": "IPv4", 00:22:35.095 "traddr": "10.0.0.1", 00:22:35.095 "trsvcid": "46498" 00:22:35.095 }, 00:22:35.095 "auth": { 00:22:35.095 "state": "completed", 00:22:35.095 "digest": "sha384", 00:22:35.095 "dhgroup": "ffdhe3072" 00:22:35.095 } 00:22:35.096 } 00:22:35.096 ]' 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.096 07:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.096 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.353 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:35.353 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:35.919 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.177 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.435 00:22:36.435 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.435 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.435 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.693 { 00:22:36.693 "cntlid": 67, 00:22:36.693 "qid": 0, 00:22:36.693 "state": "enabled", 00:22:36.693 "thread": "nvmf_tgt_poll_group_000", 00:22:36.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:36.693 "listen_address": { 00:22:36.693 "trtype": "TCP", 00:22:36.693 "adrfam": "IPv4", 00:22:36.693 "traddr": "10.0.0.2", 00:22:36.693 "trsvcid": "4420" 00:22:36.693 }, 00:22:36.693 "peer_address": { 00:22:36.693 "trtype": "TCP", 00:22:36.693 "adrfam": "IPv4", 00:22:36.693 "traddr": "10.0.0.1", 00:22:36.693 "trsvcid": "46528" 00:22:36.693 }, 00:22:36.693 "auth": { 00:22:36.693 "state": "completed", 00:22:36.693 "digest": "sha384", 00:22:36.693 "dhgroup": "ffdhe3072" 00:22:36.693 } 00:22:36.693 } 00:22:36.693 ]' 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.693 07:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.693 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.951 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:36.951 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:37.517 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.774 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.031 00:22:38.031 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.031 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.031 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.289 { 00:22:38.289 "cntlid": 69, 00:22:38.289 "qid": 0, 00:22:38.289 "state": "enabled", 00:22:38.289 "thread": "nvmf_tgt_poll_group_000", 00:22:38.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:38.289 "listen_address": { 00:22:38.289 "trtype": "TCP", 00:22:38.289 "adrfam": "IPv4", 00:22:38.289 "traddr": "10.0.0.2", 00:22:38.289 "trsvcid": "4420" 00:22:38.289 }, 00:22:38.289 "peer_address": { 00:22:38.289 "trtype": "TCP", 00:22:38.289 "adrfam": "IPv4", 00:22:38.289 "traddr": "10.0.0.1", 00:22:38.289 "trsvcid": "47974" 00:22:38.289 }, 00:22:38.289 "auth": { 00:22:38.289 "state": "completed", 00:22:38.289 "digest": "sha384", 00:22:38.289 "dhgroup": "ffdhe3072" 00:22:38.289 } 00:22:38.289 } 00:22:38.289 ]' 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
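# The auth.sh@119/@120 markers scattered through this trace come from a nested
# loop: the outer level advances the DH group (ffdhe2048 above, ffdhe3072 here,
# ffdhe4096 below), the inner level walks the key indices, and only the
# bdev_nvme_set_options arguments change between passes. A paraphrase of the
# shape, with the digest fixed at sha384 as it is throughout this stretch:

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha384 "$dhgroup" "$keyid"
    done
  done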
00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.289 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.547 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:38.547 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:39.112 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.409 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.410 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.671 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.671 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.671 { 00:22:39.671 "cntlid": 71, 00:22:39.671 "qid": 0, 00:22:39.671 "state": "enabled", 00:22:39.671 "thread": "nvmf_tgt_poll_group_000", 00:22:39.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:39.671 "listen_address": { 00:22:39.671 "trtype": "TCP", 00:22:39.671 "adrfam": "IPv4", 00:22:39.671 "traddr": "10.0.0.2", 00:22:39.671 "trsvcid": "4420" 00:22:39.671 }, 00:22:39.671 "peer_address": { 00:22:39.671 "trtype": "TCP", 00:22:39.671 "adrfam": "IPv4", 00:22:39.671 "traddr": "10.0.0.1", 00:22:39.671 "trsvcid": "48000" 00:22:39.671 }, 00:22:39.671 "auth": { 00:22:39.671 "state": "completed", 00:22:39.671 "digest": "sha384", 00:22:39.671 "dhgroup": "ffdhe3072" 00:22:39.671 } 00:22:39.671 } 00:22:39.671 ]' 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.930 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.188 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:40.188 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.753 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:41.011 07:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.011 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.269 00:22:41.269 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.269 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.269 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.526 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.527 { 00:22:41.527 "cntlid": 73, 00:22:41.527 "qid": 0, 00:22:41.527 "state": "enabled", 00:22:41.527 "thread": "nvmf_tgt_poll_group_000", 00:22:41.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:41.527 "listen_address": { 00:22:41.527 "trtype": "TCP", 00:22:41.527 "adrfam": "IPv4", 00:22:41.527 "traddr": "10.0.0.2", 00:22:41.527 "trsvcid": "4420" 00:22:41.527 }, 00:22:41.527 "peer_address": { 00:22:41.527 "trtype": "TCP", 00:22:41.527 "adrfam": "IPv4", 00:22:41.527 "traddr": "10.0.0.1", 00:22:41.527 "trsvcid": "48018" 00:22:41.527 }, 00:22:41.527 "auth": { 00:22:41.527 "state": "completed", 00:22:41.527 "digest": "sha384", 00:22:41.527 "dhgroup": "ffdhe4096" 
00:22:41.527 } 00:22:41.527 } 00:22:41.527 ]' 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.527 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.785 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:41.785 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.351 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.609 07:20:06 
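# The same credentials also drive the kernel initiator: after the SPDK-side
# attach/detach, each pass repeats the handshake with nvme-cli, passing the
# host key as --dhchap-secret and, on bidirectional passes, the controller key
# as --dhchap-ctrl-secret. The second field of a DHHC-1 secret (00 through 03
# across the four test keys here) names the HMAC used to transform the key
# material, 00 meaning no transformation. Condensed from the commands above,
# with the base64 payloads elided:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 \
      --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0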
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.609 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.866 00:22:42.866 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.866 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.866 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.124 { 00:22:43.124 "cntlid": 75, 00:22:43.124 "qid": 0, 00:22:43.124 "state": "enabled", 00:22:43.124 "thread": "nvmf_tgt_poll_group_000", 00:22:43.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:43.124 "listen_address": { 00:22:43.124 "trtype": "TCP", 00:22:43.124 "adrfam": "IPv4", 00:22:43.124 "traddr": "10.0.0.2", 00:22:43.124 "trsvcid": "4420" 00:22:43.124 }, 00:22:43.124 "peer_address": { 00:22:43.124 "trtype": "TCP", 00:22:43.124 "adrfam": 
"IPv4", 00:22:43.124 "traddr": "10.0.0.1", 00:22:43.124 "trsvcid": "48038" 00:22:43.124 }, 00:22:43.124 "auth": { 00:22:43.124 "state": "completed", 00:22:43.124 "digest": "sha384", 00:22:43.124 "dhgroup": "ffdhe4096" 00:22:43.124 } 00:22:43.124 } 00:22:43.124 ]' 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.124 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.398 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:43.398 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:43.972 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.972 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:43.972 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.972 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.973 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.973 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.973 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:43.973 07:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:44.231 07:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.231 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.545 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.545 { 00:22:44.545 "cntlid": 77, 00:22:44.545 "qid": 0, 00:22:44.545 "state": "enabled", 00:22:44.545 "thread": "nvmf_tgt_poll_group_000", 00:22:44.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:44.545 "listen_address": { 00:22:44.545 "trtype": "TCP", 00:22:44.545 "adrfam": "IPv4", 00:22:44.545 "traddr": "10.0.0.2", 
00:22:44.545 "trsvcid": "4420" 00:22:44.545 }, 00:22:44.545 "peer_address": { 00:22:44.545 "trtype": "TCP", 00:22:44.545 "adrfam": "IPv4", 00:22:44.545 "traddr": "10.0.0.1", 00:22:44.545 "trsvcid": "48078" 00:22:44.545 }, 00:22:44.545 "auth": { 00:22:44.545 "state": "completed", 00:22:44.545 "digest": "sha384", 00:22:44.545 "dhgroup": "ffdhe4096" 00:22:44.545 } 00:22:44.545 } 00:22:44.545 ]' 00:22:44.545 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.805 07:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.064 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:45.064 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:45.628 07:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.628 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:45.629 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.629 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.629 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.887 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.887 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.887 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.145 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.145 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.403 { 00:22:46.403 "cntlid": 79, 00:22:46.403 "qid": 0, 00:22:46.403 "state": "enabled", 00:22:46.403 "thread": "nvmf_tgt_poll_group_000", 00:22:46.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:46.403 "listen_address": { 00:22:46.403 "trtype": "TCP", 00:22:46.403 "adrfam": "IPv4", 
00:22:46.403 "traddr": "10.0.0.2", 00:22:46.403 "trsvcid": "4420" 00:22:46.403 }, 00:22:46.403 "peer_address": { 00:22:46.403 "trtype": "TCP", 00:22:46.403 "adrfam": "IPv4", 00:22:46.403 "traddr": "10.0.0.1", 00:22:46.403 "trsvcid": "48110" 00:22:46.403 }, 00:22:46.403 "auth": { 00:22:46.403 "state": "completed", 00:22:46.403 "digest": "sha384", 00:22:46.403 "dhgroup": "ffdhe4096" 00:22:46.403 } 00:22:46.403 } 00:22:46.403 ]' 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.403 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.661 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:46.661 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.227 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.486 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.744 00:22:47.744 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.744 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.744 07:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.002 { 00:22:48.002 "cntlid": 81, 00:22:48.002 "qid": 0, 00:22:48.002 "state": "enabled", 00:22:48.002 "thread": "nvmf_tgt_poll_group_000", 00:22:48.002 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:48.002 "listen_address": { 00:22:48.002 "trtype": "TCP", 00:22:48.002 "adrfam": "IPv4", 00:22:48.002 "traddr": "10.0.0.2", 00:22:48.002 "trsvcid": "4420" 00:22:48.002 }, 00:22:48.002 "peer_address": { 00:22:48.002 "trtype": "TCP", 00:22:48.002 "adrfam": "IPv4", 00:22:48.002 "traddr": "10.0.0.1", 00:22:48.002 "trsvcid": "35442" 00:22:48.002 }, 00:22:48.002 "auth": { 00:22:48.002 "state": "completed", 00:22:48.002 "digest": "sha384", 00:22:48.002 "dhgroup": "ffdhe6144" 00:22:48.002 } 00:22:48.002 } 00:22:48.002 ]' 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.002 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.260 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:48.260 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:22:48.824 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.082 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.648 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:22:49.648 { 00:22:49.648 "cntlid": 83, 00:22:49.648 "qid": 0, 00:22:49.648 "state": "enabled", 00:22:49.648 "thread": "nvmf_tgt_poll_group_000", 00:22:49.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:49.648 "listen_address": { 00:22:49.648 "trtype": "TCP", 00:22:49.648 "adrfam": "IPv4", 00:22:49.648 "traddr": "10.0.0.2", 00:22:49.648 "trsvcid": "4420" 00:22:49.648 }, 00:22:49.648 "peer_address": { 00:22:49.648 "trtype": "TCP", 00:22:49.648 "adrfam": "IPv4", 00:22:49.648 "traddr": "10.0.0.1", 00:22:49.648 "trsvcid": "35476" 00:22:49.648 }, 00:22:49.648 "auth": { 00:22:49.648 "state": "completed", 00:22:49.648 "digest": "sha384", 00:22:49.648 "dhgroup": "ffdhe6144" 00:22:49.648 } 00:22:49.648 } 00:22:49.648 ]' 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.648 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.905 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:49.905 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.905 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.905 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.905 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.163 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:50.163 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.727 07:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:50.727 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.985 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.242 00:22:51.242 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.242 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.242 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.499 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.500 07:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.500 { 00:22:51.500 "cntlid": 85, 00:22:51.500 "qid": 0, 00:22:51.500 "state": "enabled", 00:22:51.500 "thread": "nvmf_tgt_poll_group_000", 00:22:51.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:51.500 "listen_address": { 00:22:51.500 "trtype": "TCP", 00:22:51.500 "adrfam": "IPv4", 00:22:51.500 "traddr": "10.0.0.2", 00:22:51.500 "trsvcid": "4420" 00:22:51.500 }, 00:22:51.500 "peer_address": { 00:22:51.500 "trtype": "TCP", 00:22:51.500 "adrfam": "IPv4", 00:22:51.500 "traddr": "10.0.0.1", 00:22:51.500 "trsvcid": "35504" 00:22:51.500 }, 00:22:51.500 "auth": { 00:22:51.500 "state": "completed", 00:22:51.500 "digest": "sha384", 00:22:51.500 "dhgroup": "ffdhe6144" 00:22:51.500 } 00:22:51.500 } 00:22:51.500 ]' 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.500 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.757 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.757 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.757 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.757 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.757 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.015 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:52.016 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
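Each qpairs dump above is then checked field by field (auth.sh@75-77); the backslash-escaped patterns in the trace, e.g. [[ sha384 == \s\h\a\3\8\4 ]], are simply how bash xtrace prints the literal right-hand side of [[ ... == "$var" ]]. A condensed sketch of that verification, assuming the JSON shape shown in the dumps and the $rpc/$digest/$dhgroup variables from the sketch earlier:

# Ask the target which qpairs the subsystem has and what they negotiated.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha384
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe6144
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]   # handshake finished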
00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.583 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.148 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.148 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.405 
07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.405 { 00:22:53.405 "cntlid": 87, 00:22:53.405 "qid": 0, 00:22:53.405 "state": "enabled", 00:22:53.405 "thread": "nvmf_tgt_poll_group_000", 00:22:53.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:53.405 "listen_address": { 00:22:53.405 "trtype": "TCP", 00:22:53.405 "adrfam": "IPv4", 00:22:53.405 "traddr": "10.0.0.2", 00:22:53.405 "trsvcid": "4420" 00:22:53.405 }, 00:22:53.405 "peer_address": { 00:22:53.405 "trtype": "TCP", 00:22:53.405 "adrfam": "IPv4", 00:22:53.405 "traddr": "10.0.0.1", 00:22:53.405 "trsvcid": "35536" 00:22:53.405 }, 00:22:53.405 "auth": { 00:22:53.405 "state": "completed", 00:22:53.405 "digest": "sha384", 00:22:53.405 "dhgroup": "ffdhe6144" 00:22:53.405 } 00:22:53.405 } 00:22:53.405 ]' 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.405 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.663 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:53.663 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.229 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.487 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.053 00:22:55.053 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.053 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.053 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.053 { 00:22:55.053 "cntlid": 89, 00:22:55.053 "qid": 0, 00:22:55.053 "state": "enabled", 00:22:55.053 "thread": "nvmf_tgt_poll_group_000", 00:22:55.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:55.053 "listen_address": { 00:22:55.053 "trtype": "TCP", 00:22:55.053 "adrfam": "IPv4", 00:22:55.053 "traddr": "10.0.0.2", 00:22:55.053 "trsvcid": "4420" 00:22:55.053 }, 00:22:55.053 "peer_address": { 00:22:55.053 "trtype": "TCP", 00:22:55.053 "adrfam": "IPv4", 00:22:55.053 "traddr": "10.0.0.1", 00:22:55.053 "trsvcid": "35568" 00:22:55.053 }, 00:22:55.053 "auth": { 00:22:55.053 "state": "completed", 00:22:55.053 "digest": "sha384", 00:22:55.053 "dhgroup": "ffdhe8192" 00:22:55.053 } 00:22:55.053 } 00:22:55.053 ]' 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.053 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:55.311 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:22:55.876 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.876 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:55.876 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
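After the SPDK-host round trip, each iteration repeats the handshake through the kernel initiator with nvme-cli (auth.sh@36). Secrets travel in the DHHC-1:xx:<base64>: representation; the xx field indicates how the secret was transformed, 00 for an unmodified secret and, per the NVMe DH-HMAC-CHAP secret format (stated here as background rather than from this log), 01/02/03 for SHA-256/-384/-512-transformed ones. A sketch of that leg, reusing the key0 literals from the trace above:

# Kernel-initiator leg of the same check (nvme-cli, as at auth.sh@36).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 \
     --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 \
     --dhchap-secret 'DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==:' \
     --dhchap-ctrl-secret 'DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # trace shows: disconnected 1 controller(s)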
00:22:55.876 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.139 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.704 00:22:56.705 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.705 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.705 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.962 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.962 { 00:22:56.962 "cntlid": 91, 00:22:56.963 "qid": 0, 00:22:56.963 "state": "enabled", 00:22:56.963 "thread": "nvmf_tgt_poll_group_000", 00:22:56.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:56.963 "listen_address": { 00:22:56.963 "trtype": "TCP", 00:22:56.963 "adrfam": "IPv4", 00:22:56.963 "traddr": "10.0.0.2", 00:22:56.963 "trsvcid": "4420" 00:22:56.963 }, 00:22:56.963 "peer_address": { 00:22:56.963 "trtype": "TCP", 00:22:56.963 "adrfam": "IPv4", 00:22:56.963 "traddr": "10.0.0.1", 00:22:56.963 "trsvcid": "35590" 00:22:56.963 }, 00:22:56.963 "auth": { 00:22:56.963 "state": "completed", 00:22:56.963 "digest": "sha384", 00:22:56.963 "dhgroup": "ffdhe8192" 00:22:56.963 } 00:22:56.963 } 00:22:56.963 ]' 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.963 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.221 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:57.221 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:57.787 
07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:57.787 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.045 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.611 00:22:58.611 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.611 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.611 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.873 { 00:22:58.873 "cntlid": 93, 00:22:58.873 "qid": 0, 00:22:58.873 "state": "enabled", 00:22:58.873 "thread": "nvmf_tgt_poll_group_000", 00:22:58.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:22:58.873 "listen_address": { 00:22:58.873 "trtype": "TCP", 00:22:58.873 "adrfam": "IPv4", 00:22:58.873 "traddr": "10.0.0.2", 00:22:58.873 "trsvcid": "4420" 00:22:58.873 }, 00:22:58.873 "peer_address": { 00:22:58.873 "trtype": "TCP", 00:22:58.873 "adrfam": "IPv4", 00:22:58.873 "traddr": "10.0.0.1", 00:22:58.873 "trsvcid": "37742" 00:22:58.873 }, 00:22:58.873 "auth": { 00:22:58.873 "state": "completed", 00:22:58.873 "digest": "sha384", 00:22:58.873 "dhgroup": "ffdhe8192" 00:22:58.873 } 00:22:58.873 } 00:22:58.873 ]' 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.873 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.873 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.873 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.873 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.135 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:59.135 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:59.702 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.960 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.525 00:23:00.525 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.525 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.525 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.783 07:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.783 { 00:23:00.783 "cntlid": 95, 00:23:00.783 "qid": 0, 00:23:00.783 "state": "enabled", 00:23:00.783 "thread": "nvmf_tgt_poll_group_000", 00:23:00.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:00.783 "listen_address": { 00:23:00.783 "trtype": "TCP", 00:23:00.783 "adrfam": "IPv4", 00:23:00.783 "traddr": "10.0.0.2", 00:23:00.783 "trsvcid": "4420" 00:23:00.783 }, 00:23:00.783 "peer_address": { 00:23:00.783 "trtype": "TCP", 00:23:00.783 "adrfam": "IPv4", 00:23:00.783 "traddr": "10.0.0.1", 00:23:00.783 "trsvcid": "37766" 00:23:00.783 }, 00:23:00.783 "auth": { 00:23:00.783 "state": "completed", 00:23:00.783 "digest": "sha384", 00:23:00.783 "dhgroup": "ffdhe8192" 00:23:00.783 } 00:23:00.783 } 00:23:00.783 ]' 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.783 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.041 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:01.041 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:01.606 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.865 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.123 00:23:02.123 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.123 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.123 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.381 { 00:23:02.381 "cntlid": 97, 00:23:02.381 "qid": 0, 00:23:02.381 "state": "enabled", 00:23:02.381 "thread": "nvmf_tgt_poll_group_000", 00:23:02.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:02.381 "listen_address": { 00:23:02.381 "trtype": "TCP", 00:23:02.381 "adrfam": "IPv4", 00:23:02.381 "traddr": "10.0.0.2", 00:23:02.381 "trsvcid": "4420" 00:23:02.381 }, 00:23:02.381 "peer_address": { 00:23:02.381 "trtype": "TCP", 00:23:02.381 "adrfam": "IPv4", 00:23:02.381 "traddr": "10.0.0.1", 00:23:02.381 "trsvcid": "37806" 00:23:02.381 }, 00:23:02.381 "auth": { 00:23:02.381 "state": "completed", 00:23:02.381 "digest": "sha512", 00:23:02.381 "dhgroup": "null" 00:23:02.381 } 00:23:02.381 } 00:23:02.381 ]' 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.381 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.639 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:02.639 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret 
DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:03.204 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.204 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:03.204 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.204 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.205 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.205 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.205 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:03.205 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.462 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:23:03.720 00:23:03.720 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.720 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.720 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.978 { 00:23:03.978 "cntlid": 99, 00:23:03.978 "qid": 0, 00:23:03.978 "state": "enabled", 00:23:03.978 "thread": "nvmf_tgt_poll_group_000", 00:23:03.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:03.978 "listen_address": { 00:23:03.978 "trtype": "TCP", 00:23:03.978 "adrfam": "IPv4", 00:23:03.978 "traddr": "10.0.0.2", 00:23:03.978 "trsvcid": "4420" 00:23:03.978 }, 00:23:03.978 "peer_address": { 00:23:03.978 "trtype": "TCP", 00:23:03.978 "adrfam": "IPv4", 00:23:03.978 "traddr": "10.0.0.1", 00:23:03.978 "trsvcid": "37834" 00:23:03.978 }, 00:23:03.978 "auth": { 00:23:03.978 "state": "completed", 00:23:03.978 "digest": "sha512", 00:23:03.978 "dhgroup": "null" 00:23:03.978 } 00:23:03.978 } 00:23:03.978 ]' 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.978 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.236 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:04.236 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 
--dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:04.802 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.175 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.437 00:23:05.437 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.437 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.437 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.699 { 00:23:05.699 "cntlid": 101, 00:23:05.699 "qid": 0, 00:23:05.699 "state": "enabled", 00:23:05.699 "thread": "nvmf_tgt_poll_group_000", 00:23:05.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:05.699 "listen_address": { 00:23:05.699 "trtype": "TCP", 00:23:05.699 "adrfam": "IPv4", 00:23:05.699 "traddr": "10.0.0.2", 00:23:05.699 "trsvcid": "4420" 00:23:05.699 }, 00:23:05.699 "peer_address": { 00:23:05.699 "trtype": "TCP", 00:23:05.699 "adrfam": "IPv4", 00:23:05.699 "traddr": "10.0.0.1", 00:23:05.699 "trsvcid": "37854" 00:23:05.699 }, 00:23:05.699 "auth": { 00:23:05.699 "state": "completed", 00:23:05.699 "digest": "sha512", 00:23:05.699 "dhgroup": "null" 00:23:05.699 } 00:23:05.699 } 00:23:05.699 ]' 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.699 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.960 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:05.960 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.529 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.788 07:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.048 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.048 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.307 { 00:23:07.307 "cntlid": 103, 00:23:07.307 "qid": 0, 00:23:07.307 "state": "enabled", 00:23:07.307 "thread": "nvmf_tgt_poll_group_000", 00:23:07.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:07.307 "listen_address": { 00:23:07.307 "trtype": "TCP", 00:23:07.307 "adrfam": "IPv4", 00:23:07.307 "traddr": "10.0.0.2", 00:23:07.307 "trsvcid": "4420" 00:23:07.307 }, 00:23:07.307 "peer_address": { 00:23:07.307 "trtype": "TCP", 00:23:07.307 "adrfam": "IPv4", 00:23:07.307 "traddr": "10.0.0.1", 00:23:07.307 "trsvcid": "37888" 00:23:07.307 }, 00:23:07.307 "auth": { 00:23:07.307 "state": "completed", 00:23:07.307 "digest": "sha512", 00:23:07.307 "dhgroup": "null" 00:23:07.307 } 00:23:07.307 } 00:23:07.307 ]' 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.307 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.566 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:07.566 07:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 
6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.133 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.392 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.392 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.392 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.392 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.392 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.651 { 00:23:08.651 "cntlid": 105, 00:23:08.651 "qid": 0, 00:23:08.651 "state": "enabled", 00:23:08.651 "thread": "nvmf_tgt_poll_group_000", 00:23:08.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:08.651 "listen_address": { 00:23:08.651 "trtype": "TCP", 00:23:08.651 "adrfam": "IPv4", 00:23:08.651 "traddr": "10.0.0.2", 00:23:08.651 "trsvcid": "4420" 00:23:08.651 }, 00:23:08.651 "peer_address": { 00:23:08.651 "trtype": "TCP", 00:23:08.651 "adrfam": "IPv4", 00:23:08.651 "traddr": "10.0.0.1", 00:23:08.651 "trsvcid": "60690" 00:23:08.651 }, 00:23:08.651 "auth": { 00:23:08.651 "state": "completed", 00:23:08.651 "digest": "sha512", 00:23:08.651 "dhgroup": "ffdhe2048" 00:23:08.651 } 00:23:08.651 } 00:23:08.651 ]' 00:23:08.651 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.908 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.165 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:09.165 07:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.730 07:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.988 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.247 { 00:23:10.247 "cntlid": 107, 00:23:10.247 "qid": 0, 00:23:10.247 "state": "enabled", 00:23:10.247 "thread": "nvmf_tgt_poll_group_000", 00:23:10.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:10.247 "listen_address": { 00:23:10.247 "trtype": "TCP", 00:23:10.247 "adrfam": "IPv4", 00:23:10.247 "traddr": "10.0.0.2", 00:23:10.247 "trsvcid": "4420" 00:23:10.247 }, 00:23:10.247 "peer_address": { 00:23:10.247 "trtype": "TCP", 00:23:10.247 "adrfam": "IPv4", 00:23:10.247 "traddr": "10.0.0.1", 00:23:10.247 "trsvcid": "60696" 00:23:10.247 }, 00:23:10.247 "auth": { 00:23:10.247 "state": "completed", 00:23:10.247 "digest": "sha512", 00:23:10.247 "dhgroup": "ffdhe2048" 00:23:10.247 } 00:23:10.247 } 00:23:10.247 ]' 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.247 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:10.504 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:11.069 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.326 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.583 00:23:11.583 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.583 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.583 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.839 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.840 { 00:23:11.840 "cntlid": 109, 00:23:11.840 "qid": 0, 00:23:11.840 "state": "enabled", 00:23:11.840 "thread": "nvmf_tgt_poll_group_000", 00:23:11.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:11.840 "listen_address": { 00:23:11.840 "trtype": "TCP", 00:23:11.840 "adrfam": "IPv4", 00:23:11.840 "traddr": "10.0.0.2", 00:23:11.840 "trsvcid": "4420" 00:23:11.840 }, 00:23:11.840 "peer_address": { 00:23:11.840 "trtype": "TCP", 00:23:11.840 "adrfam": "IPv4", 00:23:11.840 "traddr": "10.0.0.1", 00:23:11.840 "trsvcid": "60732" 00:23:11.840 }, 00:23:11.840 "auth": { 00:23:11.840 "state": "completed", 00:23:11.840 "digest": "sha512", 00:23:11.840 "dhgroup": "ffdhe2048" 00:23:11.840 } 00:23:11.840 } 00:23:11.840 ]' 00:23:11.840 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.840 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.840 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.096 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:12.096 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.096 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.096 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.096 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.354 
07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:12.354 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:12.919 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.919 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.178 00:23:13.178 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.178 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.178 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.436 { 00:23:13.436 "cntlid": 111, 00:23:13.436 "qid": 0, 00:23:13.436 "state": "enabled", 00:23:13.436 "thread": "nvmf_tgt_poll_group_000", 00:23:13.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:13.436 "listen_address": { 00:23:13.436 "trtype": "TCP", 00:23:13.436 "adrfam": "IPv4", 00:23:13.436 "traddr": "10.0.0.2", 00:23:13.436 "trsvcid": "4420" 00:23:13.436 }, 00:23:13.436 "peer_address": { 00:23:13.436 "trtype": "TCP", 00:23:13.436 "adrfam": "IPv4", 00:23:13.436 "traddr": "10.0.0.1", 00:23:13.436 "trsvcid": "60758" 00:23:13.436 }, 00:23:13.436 "auth": { 00:23:13.436 "state": "completed", 00:23:13.436 "digest": "sha512", 00:23:13.436 "dhgroup": "ffdhe2048" 00:23:13.436 } 00:23:13.436 } 00:23:13.436 ]' 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.436 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
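
Note on the entries that follow: once the SPDK-host leg detaches nvme0, auth.sh replays the same key through the kernel initiator. The nvme_connect wrapper expands to plain nvme-cli, with the DH-HMAC-CHAP host and controller secrets passed as DHHC-1 blobs on the command line. A minimal standalone sketch, reusing the transport parameters printed in this trace ($key and $ckey stand in for the "DHHC-1:xx:...:" strings shown in the surrounding entries):

    # Kernel-initiator leg of one pass (sketch). $key / $ckey hold the
    # DHHC-1 secrets printed in the trace; key indices without a controller
    # secret simply drop the --dhchap-ctrl-secret flag.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 \
        --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: 1 controller(s) disconnected
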
00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:13.693 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:14.258 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
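
The for-loop xtrace lines just above (target/auth.sh@119 and @120) show the shape of the whole test: an outer walk over DH groups and an inner walk over key indices, with the host initiator re-pinned to a single digest/dhgroup pair before every connect_authenticate call. A sketch of that driver, assuming the helper names printed in the trace and eliding the full dhgroup list (this excerpt only reaches ffdhe2048 through ffdhe4096 for sha512):

    # Sketch of the matrix driver, not the verbatim script.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do         # key indices 0..3
            # Pin the host-side bdev_nvme module to one digest/dhgroup combination
            # so the negotiated parameters are deterministic for the assertions.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
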
00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.517 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.839 00:23:14.839 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.839 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.839 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.113 { 00:23:15.113 "cntlid": 113, 00:23:15.113 "qid": 0, 00:23:15.113 "state": "enabled", 00:23:15.113 "thread": "nvmf_tgt_poll_group_000", 00:23:15.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:15.113 "listen_address": { 00:23:15.113 "trtype": "TCP", 00:23:15.113 "adrfam": "IPv4", 00:23:15.113 "traddr": "10.0.0.2", 00:23:15.113 "trsvcid": "4420" 00:23:15.113 }, 00:23:15.113 "peer_address": { 00:23:15.113 "trtype": "TCP", 00:23:15.113 "adrfam": "IPv4", 00:23:15.113 "traddr": "10.0.0.1", 00:23:15.113 "trsvcid": "60778" 00:23:15.113 }, 00:23:15.113 "auth": { 00:23:15.113 "state": "completed", 00:23:15.113 "digest": "sha512", 00:23:15.113 "dhgroup": "ffdhe3072" 00:23:15.113 } 00:23:15.113 } 00:23:15.113 ]' 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.113 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.370 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:15.371 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:15.937 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.195 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.452 00:23:16.452 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.452 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.452 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.709 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.709 { 00:23:16.709 "cntlid": 115, 00:23:16.709 "qid": 0, 00:23:16.709 "state": "enabled", 00:23:16.709 "thread": "nvmf_tgt_poll_group_000", 00:23:16.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:16.709 "listen_address": { 00:23:16.710 "trtype": "TCP", 00:23:16.710 "adrfam": "IPv4", 00:23:16.710 "traddr": "10.0.0.2", 00:23:16.710 "trsvcid": "4420" 00:23:16.710 }, 00:23:16.710 "peer_address": { 00:23:16.710 "trtype": "TCP", 00:23:16.710 "adrfam": "IPv4", 00:23:16.710 "traddr": "10.0.0.1", 00:23:16.710 "trsvcid": "60806" 00:23:16.710 }, 00:23:16.710 "auth": { 00:23:16.710 "state": "completed", 00:23:16.710 "digest": "sha512", 00:23:16.710 "dhgroup": "ffdhe3072" 00:23:16.710 } 00:23:16.710 } 00:23:16.710 ]' 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
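
The three [[ ... ]] checks just above are the core assertion of connect_authenticate: after the controller attaches, the target's nvmf_subsystem_get_qpairs output must report exactly the digest, DH group, and authentication state that this pass requested. Pulled out as a standalone sketch (rpc.py path and subsystem NQN as printed in the trace; the target RPC socket is assumed to be the default):

    # Verify the negotiated DH-HMAC-CHAP parameters on the target side (sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
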
00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.710 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.967 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:16.967 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:17.531 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.531 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:17.531 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.531 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.532 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.532 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.532 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.532 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.789 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.047 00:23:18.047 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.047 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.047 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.305 { 00:23:18.305 "cntlid": 117, 00:23:18.305 "qid": 0, 00:23:18.305 "state": "enabled", 00:23:18.305 "thread": "nvmf_tgt_poll_group_000", 00:23:18.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:18.305 "listen_address": { 00:23:18.305 "trtype": "TCP", 00:23:18.305 "adrfam": "IPv4", 00:23:18.305 "traddr": "10.0.0.2", 00:23:18.305 "trsvcid": "4420" 00:23:18.305 }, 00:23:18.305 "peer_address": { 00:23:18.305 "trtype": "TCP", 00:23:18.305 "adrfam": "IPv4", 00:23:18.305 "traddr": "10.0.0.1", 00:23:18.305 "trsvcid": "54572" 00:23:18.305 }, 00:23:18.305 "auth": { 00:23:18.305 "state": "completed", 00:23:18.305 "digest": "sha512", 00:23:18.305 "dhgroup": "ffdhe3072" 00:23:18.305 } 00:23:18.305 } 00:23:18.305 ]' 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.305 07:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.305 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.562 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:18.562 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:19.128 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.387 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.645 00:23:19.645 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.645 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.645 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.904 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.904 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.904 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.904 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.904 { 00:23:19.904 "cntlid": 119, 00:23:19.904 "qid": 0, 00:23:19.904 "state": "enabled", 00:23:19.904 "thread": "nvmf_tgt_poll_group_000", 00:23:19.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:19.904 "listen_address": { 00:23:19.904 "trtype": "TCP", 00:23:19.904 "adrfam": "IPv4", 00:23:19.904 "traddr": "10.0.0.2", 00:23:19.904 "trsvcid": "4420" 00:23:19.904 }, 00:23:19.904 "peer_address": { 00:23:19.904 "trtype": "TCP", 00:23:19.904 "adrfam": "IPv4", 00:23:19.904 "traddr": "10.0.0.1", 00:23:19.904 "trsvcid": "54610" 00:23:19.904 }, 00:23:19.904 "auth": { 00:23:19.904 "state": "completed", 00:23:19.904 "digest": "sha512", 00:23:19.904 "dhgroup": "ffdhe3072" 00:23:19.904 } 00:23:19.904 } 00:23:19.904 ]' 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.904 
07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.904 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.162 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:20.162 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:20.728 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.985 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.986 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.244 00:23:21.244 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.244 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.244 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.502 { 00:23:21.502 "cntlid": 121, 00:23:21.502 "qid": 0, 00:23:21.502 "state": "enabled", 00:23:21.502 "thread": "nvmf_tgt_poll_group_000", 00:23:21.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:21.502 "listen_address": { 00:23:21.502 "trtype": "TCP", 00:23:21.502 "adrfam": "IPv4", 00:23:21.502 "traddr": "10.0.0.2", 00:23:21.502 "trsvcid": "4420" 00:23:21.502 }, 00:23:21.502 "peer_address": { 00:23:21.502 "trtype": "TCP", 00:23:21.502 "adrfam": "IPv4", 00:23:21.502 "traddr": "10.0.0.1", 00:23:21.502 "trsvcid": "54640" 00:23:21.502 }, 00:23:21.502 "auth": { 00:23:21.502 "state": "completed", 00:23:21.502 "digest": "sha512", 00:23:21.502 "dhgroup": "ffdhe4096" 00:23:21.502 } 00:23:21.502 } 00:23:21.502 ]' 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.502 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.760 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.760 07:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.760 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.760 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.760 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.018 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:22.018 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.583 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.149 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:23.149 { 00:23:23.149 "cntlid": 123, 00:23:23.149 "qid": 0, 00:23:23.149 "state": "enabled", 00:23:23.149 "thread": "nvmf_tgt_poll_group_000", 00:23:23.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:23.149 "listen_address": { 00:23:23.149 "trtype": "TCP", 00:23:23.149 "adrfam": "IPv4", 00:23:23.149 "traddr": "10.0.0.2", 00:23:23.149 "trsvcid": "4420" 00:23:23.149 }, 00:23:23.149 "peer_address": { 00:23:23.149 "trtype": "TCP", 00:23:23.149 "adrfam": "IPv4", 00:23:23.149 "traddr": "10.0.0.1", 00:23:23.149 "trsvcid": "54662" 00:23:23.149 }, 00:23:23.149 "auth": { 00:23:23.149 "state": "completed", 00:23:23.149 "digest": "sha512", 00:23:23.149 "dhgroup": "ffdhe4096" 00:23:23.149 } 00:23:23.149 } 00:23:23.149 ]' 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.149 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.149 07:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:23.407 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.348 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.606 00:23:24.606 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.606 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.606 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.864 { 00:23:24.864 "cntlid": 125, 00:23:24.864 "qid": 0, 00:23:24.864 "state": "enabled", 00:23:24.864 "thread": "nvmf_tgt_poll_group_000", 00:23:24.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:24.864 "listen_address": { 00:23:24.864 "trtype": "TCP", 00:23:24.864 "adrfam": "IPv4", 00:23:24.864 "traddr": "10.0.0.2", 00:23:24.864 "trsvcid": "4420" 00:23:24.864 }, 00:23:24.864 "peer_address": { 00:23:24.864 "trtype": "TCP", 00:23:24.864 "adrfam": "IPv4", 00:23:24.864 "traddr": "10.0.0.1", 00:23:24.864 "trsvcid": "54688" 00:23:24.864 }, 00:23:24.864 "auth": { 00:23:24.864 "state": "completed", 00:23:24.864 "digest": "sha512", 00:23:24.864 "dhgroup": "ffdhe4096" 00:23:24.864 } 00:23:24.864 } 00:23:24.864 ]' 00:23:24.864 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
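Each pass traced above repeats the same RPC sequence for a new key/dhgroup pair. A minimal standalone sketch of that sequence, using only commands visible in this run (the target-side calls are assumed to go through rpc.py's default socket, as rpc_cmd does here; key2/ckey2 name keys registered earlier in the run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Pin the host-side bdev layer to one digest/dhgroup combination for this pass.
    "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Authorize the host on the subsystem with host and controller (bidirectional) keys.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Attaching the controller performs the DH-HMAC-CHAP handshake.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Verify the qpair negotiated the expected parameters, then tear down.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed
    "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0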
00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.864 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.120 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:25.120 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.052 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:26.052 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:26.309 00:23:26.309 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.309 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.309 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.567 { 00:23:26.567 "cntlid": 127, 00:23:26.567 "qid": 0, 00:23:26.567 "state": "enabled", 00:23:26.567 "thread": "nvmf_tgt_poll_group_000", 00:23:26.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:26.567 "listen_address": { 00:23:26.567 "trtype": "TCP", 00:23:26.567 "adrfam": "IPv4", 00:23:26.567 "traddr": "10.0.0.2", 00:23:26.567 "trsvcid": "4420" 00:23:26.567 }, 00:23:26.567 "peer_address": { 00:23:26.567 "trtype": "TCP", 00:23:26.567 "adrfam": "IPv4", 00:23:26.567 "traddr": "10.0.0.1", 00:23:26.567 "trsvcid": "54708" 00:23:26.567 }, 00:23:26.567 "auth": { 00:23:26.567 "state": "completed", 00:23:26.567 "digest": "sha512", 00:23:26.567 "dhgroup": "ffdhe4096" 00:23:26.567 } 00:23:26.567 } 00:23:26.567 ]' 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.567 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:26.878 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.878 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.878 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.878 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.878 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:26.878 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.443 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:27.701 07:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.701 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.702 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.268 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.268 { 00:23:28.268 "cntlid": 129, 00:23:28.268 "qid": 0, 00:23:28.268 "state": "enabled", 00:23:28.268 "thread": "nvmf_tgt_poll_group_000", 00:23:28.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:28.268 "listen_address": { 00:23:28.268 "trtype": "TCP", 00:23:28.268 "adrfam": "IPv4", 00:23:28.268 "traddr": "10.0.0.2", 00:23:28.268 "trsvcid": "4420" 00:23:28.268 }, 00:23:28.268 "peer_address": { 00:23:28.268 "trtype": "TCP", 00:23:28.268 "adrfam": "IPv4", 00:23:28.268 "traddr": "10.0.0.1", 00:23:28.268 "trsvcid": "60976" 00:23:28.268 }, 00:23:28.268 "auth": { 00:23:28.268 "state": "completed", 00:23:28.268 "digest": "sha512", 00:23:28.268 "dhgroup": "ffdhe6144" 
00:23:28.268 } 00:23:28.268 } 00:23:28.268 ]' 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.268 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:28.528 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:29.094 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.094 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:29.094 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.094 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.351 07:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.351 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.987 00:23:29.987 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.987 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.987 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.987 { 00:23:29.987 "cntlid": 131, 00:23:29.987 "qid": 0, 00:23:29.987 "state": "enabled", 00:23:29.987 "thread": "nvmf_tgt_poll_group_000", 00:23:29.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:29.987 "listen_address": { 00:23:29.987 "trtype": "TCP", 00:23:29.987 "adrfam": "IPv4", 00:23:29.987 "traddr": "10.0.0.2", 00:23:29.987 "trsvcid": "4420" 00:23:29.987 }, 00:23:29.987 "peer_address": { 00:23:29.987 "trtype": "TCP", 00:23:29.987 "adrfam": 
"IPv4", 00:23:29.987 "traddr": "10.0.0.1", 00:23:29.987 "trsvcid": "60988" 00:23:29.987 }, 00:23:29.987 "auth": { 00:23:29.987 "state": "completed", 00:23:29.987 "digest": "sha512", 00:23:29.987 "dhgroup": "ffdhe6144" 00:23:29.987 } 00:23:29.987 } 00:23:29.987 ]' 00:23:29.987 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:30.267 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:30.837 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.837 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.838 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:31.097 07:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.097 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.662 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.662 { 00:23:31.662 "cntlid": 133, 00:23:31.662 "qid": 0, 00:23:31.662 "state": "enabled", 00:23:31.662 "thread": "nvmf_tgt_poll_group_000", 00:23:31.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:31.662 "listen_address": { 00:23:31.662 "trtype": "TCP", 00:23:31.662 "adrfam": "IPv4", 00:23:31.662 "traddr": "10.0.0.2", 
00:23:31.662 "trsvcid": "4420" 00:23:31.662 }, 00:23:31.662 "peer_address": { 00:23:31.662 "trtype": "TCP", 00:23:31.662 "adrfam": "IPv4", 00:23:31.662 "traddr": "10.0.0.1", 00:23:31.662 "trsvcid": "32792" 00:23:31.662 }, 00:23:31.662 "auth": { 00:23:31.662 "state": "completed", 00:23:31.662 "digest": "sha512", 00:23:31.662 "dhgroup": "ffdhe6144" 00:23:31.662 } 00:23:31.662 } 00:23:31.662 ]' 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.662 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.920 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:31.920 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.920 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.920 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.920 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.178 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:32.178 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.746 07:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:32.746 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:33.311 00:23:33.311 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.311 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.311 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.569 { 00:23:33.569 "cntlid": 135, 00:23:33.569 "qid": 0, 00:23:33.569 "state": "enabled", 00:23:33.569 "thread": "nvmf_tgt_poll_group_000", 00:23:33.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:33.569 "listen_address": { 00:23:33.569 "trtype": "TCP", 00:23:33.569 "adrfam": "IPv4", 
00:23:33.569 "traddr": "10.0.0.2", 00:23:33.569 "trsvcid": "4420" 00:23:33.569 }, 00:23:33.569 "peer_address": { 00:23:33.569 "trtype": "TCP", 00:23:33.569 "adrfam": "IPv4", 00:23:33.569 "traddr": "10.0.0.1", 00:23:33.569 "trsvcid": "32820" 00:23:33.569 }, 00:23:33.569 "auth": { 00:23:33.569 "state": "completed", 00:23:33.569 "digest": "sha512", 00:23:33.569 "dhgroup": "ffdhe6144" 00:23:33.569 } 00:23:33.569 } 00:23:33.569 ]' 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.569 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.827 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:33.827 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.394 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 
00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.653 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.217 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.217 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:35.531 { 00:23:35.531 "cntlid": 137, 00:23:35.531 "qid": 0, 00:23:35.531 "state": "enabled", 00:23:35.531 "thread": "nvmf_tgt_poll_group_000", 00:23:35.531 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:35.531 "listen_address": { 00:23:35.531 "trtype": "TCP", 00:23:35.531 "adrfam": "IPv4", 00:23:35.531 "traddr": "10.0.0.2", 00:23:35.531 "trsvcid": "4420" 00:23:35.531 }, 00:23:35.531 "peer_address": { 00:23:35.531 "trtype": "TCP", 00:23:35.531 "adrfam": "IPv4", 00:23:35.531 "traddr": "10.0.0.1", 00:23:35.531 "trsvcid": "32844" 00:23:35.531 }, 00:23:35.531 "auth": { 00:23:35.531 "state": "completed", 00:23:35.531 "digest": "sha512", 00:23:35.531 "dhgroup": "ffdhe8192" 00:23:35.531 } 00:23:35.531 } 00:23:35.531 ]' 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.531 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.791 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:35.791 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.359 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.925 00:23:36.925 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.925 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.925 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:23:37.184 { 00:23:37.184 "cntlid": 139, 00:23:37.184 "qid": 0, 00:23:37.184 "state": "enabled", 00:23:37.184 "thread": "nvmf_tgt_poll_group_000", 00:23:37.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:37.184 "listen_address": { 00:23:37.184 "trtype": "TCP", 00:23:37.184 "adrfam": "IPv4", 00:23:37.184 "traddr": "10.0.0.2", 00:23:37.184 "trsvcid": "4420" 00:23:37.184 }, 00:23:37.184 "peer_address": { 00:23:37.184 "trtype": "TCP", 00:23:37.184 "adrfam": "IPv4", 00:23:37.184 "traddr": "10.0.0.1", 00:23:37.184 "trsvcid": "32864" 00:23:37.184 }, 00:23:37.184 "auth": { 00:23:37.184 "state": "completed", 00:23:37.184 "digest": "sha512", 00:23:37.184 "dhgroup": "ffdhe8192" 00:23:37.184 } 00:23:37.184 } 00:23:37.184 ]' 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:37.184 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.442 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.442 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.442 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.442 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:37.442 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: --dhchap-ctrl-secret DHHC-1:02:N2YzNDYzNGRiNTUzYmVmOGY2NDdhNjdlMTM1MzFkZTJhN2YwNDE5OTJiYWJiYjMyfb4DGg==: 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:38.008 07:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.008 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.361 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.928 00:23:38.928 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.928 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.928 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.928 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.928 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.928 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.928 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.928 07:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.928 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.928 { 00:23:38.928 "cntlid": 141, 00:23:38.928 "qid": 0, 00:23:38.928 "state": "enabled", 00:23:38.928 "thread": "nvmf_tgt_poll_group_000", 00:23:38.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:38.928 "listen_address": { 00:23:38.928 "trtype": "TCP", 00:23:38.928 "adrfam": "IPv4", 00:23:38.928 "traddr": "10.0.0.2", 00:23:38.928 "trsvcid": "4420" 00:23:38.928 }, 00:23:38.928 "peer_address": { 00:23:38.928 "trtype": "TCP", 00:23:38.928 "adrfam": "IPv4", 00:23:38.928 "traddr": "10.0.0.1", 00:23:38.928 "trsvcid": "60982" 00:23:38.928 }, 00:23:38.928 "auth": { 00:23:38.928 "state": "completed", 00:23:38.928 "digest": "sha512", 00:23:38.928 "dhgroup": "ffdhe8192" 00:23:38.928 } 00:23:38.928 } 00:23:38.928 ]' 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.186 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.443 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:39.443 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:01:YTZmYzI5YzExNjVjOGU1YzE0NmY3ZGVmMmI5ZjY1MjG5V+tJ: 00:23:40.010 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
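Each pass of the loop above has the same shape: configure one digest/dhgroup pair on the host, register the host NQN on the subsystem with that key slot, attach a controller through /var/tmp/host.sock, and then confirm from the target side that the qpair really completed DH-HMAC-CHAP with the expected parameters. A condensed, illustrative sketch of that verification step, reusing the jq filters from the trace (the rpc.py invocation is shortened here):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished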
00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.010 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:40.268 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:40.527 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.785 
07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.785 { 00:23:40.785 "cntlid": 143, 00:23:40.785 "qid": 0, 00:23:40.785 "state": "enabled", 00:23:40.785 "thread": "nvmf_tgt_poll_group_000", 00:23:40.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:40.785 "listen_address": { 00:23:40.785 "trtype": "TCP", 00:23:40.785 "adrfam": "IPv4", 00:23:40.785 "traddr": "10.0.0.2", 00:23:40.785 "trsvcid": "4420" 00:23:40.785 }, 00:23:40.785 "peer_address": { 00:23:40.785 "trtype": "TCP", 00:23:40.785 "adrfam": "IPv4", 00:23:40.785 "traddr": "10.0.0.1", 00:23:40.785 "trsvcid": "32794" 00:23:40.785 }, 00:23:40.785 "auth": { 00:23:40.785 "state": "completed", 00:23:40.785 "digest": "sha512", 00:23:40.785 "dhgroup": "ffdhe8192" 00:23:40.785 } 00:23:40.785 } 00:23:40.785 ]' 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.785 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:41.043 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.043 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.043 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.043 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.328 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:41.329 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:41.899 
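The --dhchap-secret strings handed to nvme connect follow the DH-HMAC-CHAP secret representation: DHHC-1:xx:<base64 payload>:, where xx encodes the hash used to transform the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the payload is the key followed by a 4-byte CRC-32. The secrets in this trace line up with that layout; for example, the DHHC-1:01: secret used for key1 above decodes to 36 bytes, a 32-byte key plus the CRC:

  # verify the payload length of the key1 secret from the trace
  echo 'NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG' | base64 -d | wc -c   # prints 36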
07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:41.899 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:42.158 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.159 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.724 00:23:42.724 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.724 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.724 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:42.983 07:21:06 
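With the individual digest/dhgroup combinations covered, the host is now reconfigured once to advertise every supported digest and DH group at the same time, and key0 is exercised against that permissive setup. The host-side call, exactly as issued in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192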
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.983 { 00:23:42.983 "cntlid": 145, 00:23:42.983 "qid": 0, 00:23:42.983 "state": "enabled", 00:23:42.983 "thread": "nvmf_tgt_poll_group_000", 00:23:42.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:42.983 "listen_address": { 00:23:42.983 "trtype": "TCP", 00:23:42.983 "adrfam": "IPv4", 00:23:42.983 "traddr": "10.0.0.2", 00:23:42.983 "trsvcid": "4420" 00:23:42.983 }, 00:23:42.983 "peer_address": { 00:23:42.983 "trtype": "TCP", 00:23:42.983 "adrfam": "IPv4", 00:23:42.983 "traddr": "10.0.0.1", 00:23:42.983 "trsvcid": "32814" 00:23:42.983 }, 00:23:42.983 "auth": { 00:23:42.983 "state": "completed", 00:23:42.983 "digest": "sha512", 00:23:42.983 "dhgroup": "ffdhe8192" 00:23:42.983 } 00:23:42.983 } 00:23:42.983 ]' 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.983 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.983 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.983 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.983 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.983 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.983 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.241 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:43.241 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:00:YWFlNDEwYWY1NzJjMTFiMDEzNmUyMjc4YmZkYWY2YWFiNThjZjg5OTlmYjYxNmZlr0jECg==: --dhchap-ctrl-secret DHHC-1:03:NGYyYzhlOWU1Y2JlZmE0Yjk1ZmFiZWZhZGQzZmFlMzk0MTIxYjM3MzExOTU4MmY3MGY1MmI3ZmYyOTlmZDdiY0psf6g=: 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.849 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:43.849 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:44.415 request: 00:23:44.415 { 00:23:44.415 "name": "nvme0", 00:23:44.415 "trtype": "tcp", 00:23:44.415 "traddr": "10.0.0.2", 00:23:44.415 "adrfam": "ipv4", 00:23:44.415 "trsvcid": "4420", 00:23:44.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:44.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:44.415 "prchk_reftag": false, 00:23:44.415 "prchk_guard": false, 00:23:44.415 "hdgst": false, 00:23:44.415 "ddgst": false, 00:23:44.415 "dhchap_key": "key2", 00:23:44.415 "allow_unrecognized_csi": false, 00:23:44.415 "method": "bdev_nvme_attach_controller", 00:23:44.415 "req_id": 1 00:23:44.415 } 
00:23:44.415 Got JSON-RPC error response 00:23:44.415 response: 00:23:44.415 { 00:23:44.415 "code": -5, 00:23:44.415 "message": "Input/output error" 00:23:44.415 } 00:23:44.415 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:44.415 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.415 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.415 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.415 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.416 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.981 request: 00:23:44.981 { 00:23:44.981 "name": "nvme0", 00:23:44.981 "trtype": "tcp", 00:23:44.981 "traddr": "10.0.0.2", 00:23:44.981 "adrfam": "ipv4", 00:23:44.981 "trsvcid": "4420", 00:23:44.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:44.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:44.982 "prchk_reftag": false, 00:23:44.982 "prchk_guard": false, 00:23:44.982 "hdgst": false, 00:23:44.982 "ddgst": false, 00:23:44.982 "dhchap_key": "key1", 00:23:44.982 "dhchap_ctrlr_key": "ckey2", 00:23:44.982 "allow_unrecognized_csi": false, 00:23:44.982 "method": "bdev_nvme_attach_controller", 00:23:44.982 "req_id": 1 00:23:44.982 } 00:23:44.982 Got JSON-RPC error response 00:23:44.982 response: 00:23:44.982 { 00:23:44.982 "code": -5, 00:23:44.982 "message": "Input/output error" 00:23:44.982 } 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.982 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.546 request: 00:23:45.546 { 00:23:45.546 "name": "nvme0", 00:23:45.546 "trtype": "tcp", 00:23:45.546 "traddr": "10.0.0.2", 00:23:45.546 "adrfam": "ipv4", 00:23:45.546 "trsvcid": "4420", 00:23:45.546 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:45.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:45.546 "prchk_reftag": false, 00:23:45.546 "prchk_guard": false, 00:23:45.546 "hdgst": false, 00:23:45.546 "ddgst": false, 00:23:45.546 "dhchap_key": "key1", 00:23:45.546 "dhchap_ctrlr_key": "ckey1", 00:23:45.546 "allow_unrecognized_csi": false, 00:23:45.546 "method": "bdev_nvme_attach_controller", 00:23:45.546 "req_id": 1 00:23:45.546 } 00:23:45.546 Got JSON-RPC error response 00:23:45.546 response: 00:23:45.546 { 00:23:45.546 "code": -5, 00:23:45.546 "message": "Input/output error" 00:23:45.546 } 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:45.546 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66142 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66142 ']' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66142 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66142 00:23:45.547 07:21:09 
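The attach failures above are the point of this phase. NOT, a helper from autotest_common.sh visible in the trace, inverts the exit status of the wrapped command, so each test passes only if bdev_nvme_attach_controller is rejected: the wrong key slot (key2 where only key1 is registered), a mismatched controller key (ckey2 where the target holds ckey1), and a bidirectional request against a host entry re-added without any controller key. In the helpers' own terms, the pattern looks like:

  NOT bdev_connect -b nvme0 --dhchap-key key2                           # wrong key slot
  NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2  # wrong ctrlr key
  # each rejection surfaces as: {"code": -5, "message": "Input/output error"}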
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.547 killing process with pid 66142 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66142' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66142 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66142 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=68875 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 68875 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68875 ']' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.547 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
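At @159-@160 the first target (pid 66142) is torn down and a new one is started in a mode suited to keyring-backed secrets: --wait-for-rpc holds framework initialization until configuration can be pushed over RPC, and -L nvmf_auth turns on the authentication debug log flag. The start command, as logged:

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth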
00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 68875 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68875 ']' 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.495 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 null0 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mgo 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.n0n ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n0n 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tDh 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 
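The @174-@176 loop mirrors the earlier key setup, except that each secret now lives in a file and is registered as a named keyring entry; a ckeyN entry is created only for slots that actually have a controller secret (key3 gets none, matching the unidirectional case tested before). Two representative calls from the trace:

  rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.Mgo
  rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n0n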
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.o7s ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o7s 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mfN 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ntJ ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ntJ 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:46.754 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tQC 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:46.755 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:47.690 nvme0n1 00:23:47.690 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.690 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.690 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.948 { 00:23:47.948 "cntlid": 1, 00:23:47.948 "qid": 0, 00:23:47.948 "state": "enabled", 00:23:47.948 "thread": "nvmf_tgt_poll_group_000", 00:23:47.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:47.948 "listen_address": { 00:23:47.948 "trtype": "TCP", 00:23:47.948 "adrfam": "IPv4", 00:23:47.948 "traddr": "10.0.0.2", 00:23:47.948 "trsvcid": "4420" 00:23:47.948 }, 00:23:47.948 "peer_address": { 00:23:47.948 "trtype": "TCP", 00:23:47.948 "adrfam": "IPv4", 00:23:47.948 "traddr": "10.0.0.1", 00:23:47.948 "trsvcid": "46400" 00:23:47.948 }, 00:23:47.948 "auth": { 00:23:47.948 "state": "completed", 00:23:47.948 "digest": "sha512", 00:23:47.948 "dhgroup": "ffdhe8192" 00:23:47.948 } 00:23:47.948 } 00:23:47.948 ]' 00:23:47.948 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.949 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.206 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:48.206 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key3 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:48.772 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:49.031 07:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.031 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.288 request: 00:23:49.288 { 00:23:49.288 "name": "nvme0", 00:23:49.288 "trtype": "tcp", 00:23:49.288 "traddr": "10.0.0.2", 00:23:49.288 "adrfam": "ipv4", 00:23:49.288 "trsvcid": "4420", 00:23:49.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:49.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:49.288 "prchk_reftag": false, 00:23:49.288 "prchk_guard": false, 00:23:49.288 "hdgst": false, 00:23:49.288 "ddgst": false, 00:23:49.288 "dhchap_key": "key3", 00:23:49.288 "allow_unrecognized_csi": false, 00:23:49.288 "method": "bdev_nvme_attach_controller", 00:23:49.288 "req_id": 1 00:23:49.288 } 00:23:49.288 Got JSON-RPC error response 00:23:49.288 response: 00:23:49.288 { 00:23:49.288 "code": -5, 00:23:49.288 "message": "Input/output error" 00:23:49.288 } 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:49.288 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:49.546 07:21:13 
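These last negative tests probe negotiation constraints rather than key material. key3 is a SHA-512-transformed secret (note its file, /tmp/spdk.key-sha512.tQC), so limiting the host to --dhchap-digests sha256 at @183 presumably leaves no hash the key can be used with, and the attach is required to fail; the trace then repeats the pattern at @187-@193 with the DH groups narrowed to ffdhe2048, expecting the same -5 Input/output error. Schematically, in the helpers' own terms:

  hostrpc bdev_nvme_set_options --dhchap-digests sha256   # restrict the offer
  NOT bdev_connect -b nvme0 --dhchap-key key3             # must fail with -5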
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.546 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.804 request: 00:23:49.804 { 00:23:49.804 "name": "nvme0", 00:23:49.804 "trtype": "tcp", 00:23:49.804 "traddr": "10.0.0.2", 00:23:49.804 "adrfam": "ipv4", 00:23:49.804 "trsvcid": "4420", 00:23:49.804 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:49.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:49.804 "prchk_reftag": false, 00:23:49.804 "prchk_guard": false, 00:23:49.804 "hdgst": false, 00:23:49.804 "ddgst": false, 00:23:49.804 "dhchap_key": "key3", 00:23:49.804 "allow_unrecognized_csi": false, 00:23:49.804 "method": "bdev_nvme_attach_controller", 00:23:49.804 "req_id": 1 00:23:49.804 } 00:23:49.804 Got JSON-RPC error response 00:23:49.804 response: 00:23:49.804 { 00:23:49.804 "code": -5, 00:23:49.804 "message": "Input/output error" 00:23:49.804 } 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.804 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.063 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.320 request: 00:23:50.320 { 00:23:50.320 "name": "nvme0", 00:23:50.320 "trtype": "tcp", 00:23:50.320 "traddr": "10.0.0.2", 00:23:50.320 "adrfam": "ipv4", 00:23:50.320 "trsvcid": "4420", 00:23:50.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:50.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:50.320 "prchk_reftag": false, 00:23:50.320 "prchk_guard": false, 00:23:50.320 "hdgst": false, 00:23:50.320 "ddgst": false, 00:23:50.320 "dhchap_key": "key0", 00:23:50.320 
"dhchap_ctrlr_key": "key1", 00:23:50.320 "allow_unrecognized_csi": false, 00:23:50.320 "method": "bdev_nvme_attach_controller", 00:23:50.320 "req_id": 1 00:23:50.320 } 00:23:50.320 Got JSON-RPC error response 00:23:50.321 response: 00:23:50.321 { 00:23:50.321 "code": -5, 00:23:50.321 "message": "Input/output error" 00:23:50.321 } 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:50.321 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:50.579 nvme0n1 00:23:50.579 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:50.579 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:50.579 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.838 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.838 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.838 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:51.097 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:52.031 nvme0n1 00:23:52.031 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:52.031 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:52.031 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:52.031 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.289 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.289 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:52.289 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid 6878406f-1821-4d15-bee4-f9cf994eb227 -l 0 --dhchap-secret DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: --dhchap-ctrl-secret DHHC-1:03:YzMxZjRkYThhYmJiMjcwY2YwZGU3OTkxZTM5M2RmNmY4NWJlZmRhOWJiMGQwOGM3Zjk0NmY4NjQzODE5ODIwNqHuS7U=: 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 
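
For context, the nvme_get_ctrlr step traced just above (target/auth.sh@41-44) amounts to a sysfs scan for the kernel fabrics controller whose subsystem NQN matches the one the host just connected to. A minimal sketch of that pattern follows; the `subsysnqn` attribute name and the failure return are assumptions, since the trace truncates the actual comparison expression:

    # Sketch: locate the nvme-fabrics controller serving a given subsystem NQN.
    # Assumes the standard sysfs layout under nvme-fabrics/ctl and a 'subsysnqn'
    # attribute per controller; neither is shown verbatim in this trace.
    nvme_get_ctrlr() {
        local subnqn=$1 dev
        for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
            [[ -e $dev/subsysnqn ]] || continue
            if [[ $(<"$dev/subsysnqn") == "$subnqn" ]]; then
                echo "${dev##*/}"   # e.g. nvme0, as echoed in the trace
                return 0
            fi
        done
        return 1
    }
    # Hypothetical usage: nctrlr=$(nvme_get_ctrlr nqn.2024-03.io.spdk:cnode0)
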
00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.855 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:53.113 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:53.677 request: 00:23:53.677 { 00:23:53.677 "name": "nvme0", 00:23:53.677 "trtype": "tcp", 00:23:53.677 "traddr": "10.0.0.2", 00:23:53.677 "adrfam": "ipv4", 00:23:53.677 "trsvcid": "4420", 00:23:53.677 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:53.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227", 00:23:53.677 "prchk_reftag": false, 00:23:53.677 "prchk_guard": false, 00:23:53.677 "hdgst": false, 00:23:53.677 "ddgst": false, 00:23:53.677 "dhchap_key": "key1", 00:23:53.677 "allow_unrecognized_csi": false, 00:23:53.677 "method": "bdev_nvme_attach_controller", 00:23:53.677 "req_id": 1 00:23:53.677 } 00:23:53.677 Got JSON-RPC error response 00:23:53.677 response: 00:23:53.677 { 00:23:53.677 "code": -5, 00:23:53.677 "message": "Input/output error" 00:23:53.677 } 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:53.677 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:54.242 nvme0n1 00:23:54.242 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:54.242 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.242 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:54.499 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.499 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.500 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:54.760 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:55.019 nvme0n1 00:23:55.019 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:55.019 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:55.019 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.294 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.294 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:55.294 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: '' 2s 00:23:55.585 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: ]] 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWEyYzNlZmUwZTY1NTQwNzI5NjI5NzQxM2JmOThhOWTO9XpG: 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:55.586 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: 2s 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: ]] 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Nzg5MjkwMTk0ODQwNTcyZWQzMzFmNWQ3NjhjNjNiMzA1ZWQwN2IyYWJhOGRjOTNmbIiHFQ==: 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:57.483 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:59.380 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
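
The waitforblk calls traced above (common/autotest_common.sh@1239-1250) poll lsblk until the reconnected namespace is visible as a block device before the next key rotation proceeds. A minimal sketch of that polling loop; the retry cap and sleep interval are assumptions chosen for illustration, as the trace only shows the success path:

    # Sketch: wait until a block device (e.g. nvme0n1) appears in lsblk output.
    # The 20-iteration cap and 0.5s interval are illustrative assumptions.
    waitforblk() {
        local name=$1 i=0
        while ! lsblk -l -o NAME | grep -q -w "$name"; do
            (( ++i > 20 )) && return 1   # give up after ~10s
            sleep 0.5
        done
        return 0
    }
    # Usage: waitforblk nvme0n1
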
00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:59.638 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:59.639 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:00.203 nvme0n1 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:00.462 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:01.034 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:01.034 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:01.034 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:01.034 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:01.291 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc 
bdev_nvme_get_controllers 00:24:01.291 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:01.291 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.548 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.548 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:01.548 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.548 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:01.549 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:02.114 request: 00:24:02.114 { 00:24:02.114 "name": "nvme0", 00:24:02.114 "dhchap_key": "key1", 00:24:02.114 "dhchap_ctrlr_key": "key3", 00:24:02.114 "method": "bdev_nvme_set_keys", 00:24:02.114 "req_id": 1 00:24:02.114 } 00:24:02.114 Got JSON-RPC error response 00:24:02.114 response: 00:24:02.114 { 00:24:02.114 "code": -13, 00:24:02.114 "message": "Permission denied" 00:24:02.114 } 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:02.114 07:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:02.114 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:03.486 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:03.487 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:04.450 nvme0n1 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc 
bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:04.450 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:04.708 request: 00:24:04.708 { 00:24:04.708 "name": "nvme0", 00:24:04.708 "dhchap_key": "key2", 00:24:04.708 "dhchap_ctrlr_key": "key0", 00:24:04.708 "method": "bdev_nvme_set_keys", 00:24:04.708 "req_id": 1 00:24:04.708 } 00:24:04.708 Got JSON-RPC error response 00:24:04.708 response: 00:24:04.708 { 00:24:04.708 "code": -13, 00:24:04.708 "message": "Permission denied" 00:24:04.708 } 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:04.708 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.966 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:04.966 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:05.898 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:05.898 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:05.898 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66169 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66169 ']' 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66169 
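
The sleep 1s loop traced above waits for the host's controller list to drain once reconnect attempts are exhausted (the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1). Reconstructed as a standalone sketch using the rpc.py path and host socket shown in this log:

    # Sketch: poll the host-side RPC server until no bdev_nvme controllers remain.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/host.sock
    while (( $("$rpc" -s "$sock" bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done

Once jq length reports 0, the test tears down its traps and moves on to cleanup, as the trace shows.
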
00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66169 00:24:06.156 killing process with pid 66169 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66169' 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66169 00:24:06.156 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66169 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:06.414 rmmod nvme_tcp 00:24:06.414 rmmod nvme_fabrics 00:24:06.414 rmmod nvme_keyring 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 68875 ']' 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 68875 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68875 ']' 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68875 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.414 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68875 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.672 killing process with pid 68875 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68875' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@973 -- # kill 68875 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68875 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete 
initiator1' 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:24:06.672 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Mgo /tmp/spdk.key-sha256.tDh /tmp/spdk.key-sha384.mfN /tmp/spdk.key-sha512.tQC /tmp/spdk.key-sha512.n0n /tmp/spdk.key-sha384.o7s /tmp/spdk.key-sha256.ntJ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:24:06.931 00:24:06.932 real 2m31.781s 00:24:06.932 user 5m58.279s 00:24:06.932 sys 0m19.642s 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.932 ************************************ 00:24:06.932 END TEST nvmf_auth_target 00:24:06.932 ************************************ 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:06.932 ************************************ 00:24:06.932 START TEST nvmf_bdevio_no_huge 00:24:06.932 ************************************ 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:06.932 * Looking for test storage... 
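
Stepping back to the nvmf_fini teardown traced just before this test's preamble: it walks the bridge and veth device map and deletes whichever devices still exist. Reduced to a sketch; the namespace-prefix handling is an assumption, since the trace only exercises the empty-namespace branch:

    # Sketch: the delete_dev pattern from the teardown above.
    # in_ns is assumed to be an optional 'ip netns exec <ns>' prefix.
    delete_dev() {
        local dev=$1 in_ns=${2:-}
        [[ -e /sys/class/net/$dev/address ]] || return 0   # already gone, skip
        eval "$in_ns ip link delete $dev"
    }
    for dev in nvmf_br initiator0 initiator1; do
        delete_dev "$dev"
    done
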
00:24:06.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.932 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.932 --rc genhtml_branch_coverage=1 00:24:06.932 --rc genhtml_function_coverage=1 00:24:06.932 --rc genhtml_legend=1 00:24:06.932 --rc geninfo_all_blocks=1 00:24:06.932 --rc geninfo_unexecuted_blocks=1 00:24:06.932 00:24:06.932 ' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.932 --rc genhtml_branch_coverage=1 00:24:06.932 --rc genhtml_function_coverage=1 00:24:06.932 --rc genhtml_legend=1 00:24:06.932 --rc geninfo_all_blocks=1 00:24:06.932 --rc geninfo_unexecuted_blocks=1 00:24:06.932 00:24:06.932 ' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.932 --rc genhtml_branch_coverage=1 00:24:06.932 --rc genhtml_function_coverage=1 00:24:06.932 --rc genhtml_legend=1 00:24:06.932 --rc geninfo_all_blocks=1 00:24:06.932 --rc geninfo_unexecuted_blocks=1 00:24:06.932 00:24:06.932 ' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.932 --rc genhtml_branch_coverage=1 00:24:06.932 --rc genhtml_function_coverage=1 00:24:06.932 --rc genhtml_legend=1 00:24:06.932 --rc geninfo_all_blocks=1 00:24:06.932 --rc geninfo_unexecuted_blocks=1 00:24:06.932 00:24:06.932 ' 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:06.932 
07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.932 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:06.933 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:06.933 07:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@223 -- # create_target_ns 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:06.933 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:06.934 07:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target0 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:06.934 07:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:06.934 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:07.195 10.0.0.1 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:07.195 10.0.0.2 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:07.195 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:07.196 07:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772163 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:07.196 10.0.0.3 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772164 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:07.196 10.0.0.4 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.196 07:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.196 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:07.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:24:07.197 00:24:07.197 --- 10.0.0.1 ping statistics --- 00:24:07.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.197 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:07.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:07.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.026 ms 00:24:07.197 00:24:07.197 --- 10.0.0.2 ping statistics --- 00:24:07.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.197 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:07.197 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:07.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:07.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:07.198 00:24:07.198 --- 10.0.0.3 ping statistics --- 00:24:07.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.198 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:07.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:07.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.136 ms 00:24:07.198 00:24:07.198 --- 10.0.0.4 ping statistics --- 00:24:07.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.198 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # return 0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.198 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
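The repeated 'cat /sys/class/net/<dev>/ifalias' reads in the trace above work because setup.sh records every address it assigns in the interface's kernel ifalias attribute. A minimal sketch of that convention, reusing the set_ip/get_ip_address helper names from the trace but omitting their in-namespace variants and error handling (an illustration, not the real helpers):

    set_ip() {                         # assign the address and remember it in ifalias
        ip addr add "$2/24" dev "$1"
        echo "$2" | tee "/sys/class/net/$1/ifalias"
    }
    get_ip_address() {                 # later lookups read it straight back
        cat "/sys/class/net/$1/ifalias"
    }
    # e.g.: set_ip initiator0 10.0.0.1; get_ip_address initiator0  ->  10.0.0.1

This is why get_tcp_target_ip_address above resolves target0 to 10.0.0.2 without having to parse 'ip addr' output.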
00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:07.199 ' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=69473 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 69473 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 69473 ']' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:07.199 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:07.457 [2024-11-20 07:21:31.423323] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:07.457 [2024-11-20 07:21:31.423379] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:07.457 [2024-11-20 07:21:31.561484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.457 [2024-11-20 07:21:31.611146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.457 [2024-11-20 07:21:31.611189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.457 [2024-11-20 07:21:31.611196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.457 [2024-11-20 07:21:31.611201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.457 [2024-11-20 07:21:31.611205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
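Condensed, the nvmf_veth_init sequence traced above amounts to the following for the first initiator/target pair; the second pair repeats it with initiator1/target1 and 10.0.0.3/10.0.0.4 (setup_interface_pair's ip argument 167772161 is 0x0A000001, which val_to_ip prints as 10.0.0.1). This is a sketch assembled from the commands visible in the trace, with the individual 'ip link set ... up' steps and the iptables comment arguments omitted, not a replacement for setup.sh:

    ip netns add nvmf_ns_spdk                            # target side gets its own namespace
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                      # bridge joining all the *_br peers
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br             # enslave both bridge-side peers
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1        # cross-namespace connectivity checks
    ping -c 1 10.0.0.2

With the links up and pinging, nvmf/common.sh@327 then starts the target inside the namespace, without hugepages as the test name implies: 'ip netns exec nvmf_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78'. The core mask 0x78 selects cores 3-6, which matches the four 'Reactor started' notices that follow.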
00:24:07.457 [2024-11-20 07:21:31.611817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:07.457 [2024-11-20 07:21:31.612026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:07.457 [2024-11-20 07:21:31.612092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:07.457 [2024-11-20 07:21:31.612158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.457 [2024-11-20 07:21:31.616880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.390 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 [2024-11-20 07:21:32.313351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 Malloc0 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.391 07:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.391 [2024-11-20 07:21:32.349847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:08.391 { 00:24:08.391 "params": { 00:24:08.391 "name": "Nvme$subsystem", 00:24:08.391 "trtype": "$TEST_TRANSPORT", 00:24:08.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.391 "adrfam": "ipv4", 00:24:08.391 "trsvcid": "$NVMF_PORT", 00:24:08.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.391 "hdgst": ${hdgst:-false}, 00:24:08.391 "ddgst": ${ddgst:-false} 00:24:08.391 }, 00:24:08.391 "method": "bdev_nvme_attach_controller" 00:24:08.391 } 00:24:08.391 EOF 00:24:08.391 )") 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:24:08.391 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:08.391 "params": { 00:24:08.391 "name": "Nvme1", 00:24:08.391 "trtype": "tcp", 00:24:08.391 "traddr": "10.0.0.2", 00:24:08.391 "adrfam": "ipv4", 00:24:08.391 "trsvcid": "4420", 00:24:08.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.391 "hdgst": false, 00:24:08.391 "ddgst": false 00:24:08.391 }, 00:24:08.391 "method": "bdev_nvme_attach_controller" 00:24:08.391 }' 00:24:08.391 [2024-11-20 07:21:32.391052] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:08.391 [2024-11-20 07:21:32.391113] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69509 ] 00:24:08.391 [2024-11-20 07:21:32.536440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:08.391 [2024-11-20 07:21:32.586033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.391 [2024-11-20 07:21:32.586514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.391 [2024-11-20 07:21:32.586515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.650 [2024-11-20 07:21:32.599471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.650 I/O targets: 00:24:08.650 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:08.650 00:24:08.650 00:24:08.650 CUnit - A unit testing framework for C - Version 2.1-3 00:24:08.650 http://cunit.sourceforge.net/ 00:24:08.650 00:24:08.650 00:24:08.650 Suite: bdevio tests on: Nvme1n1 00:24:08.650 Test: blockdev write read block ...passed 00:24:08.650 Test: blockdev write zeroes read block ...passed 00:24:08.650 Test: blockdev write zeroes read no split ...passed 00:24:08.650 Test: blockdev write zeroes read split ...passed 00:24:08.650 Test: blockdev write zeroes read split partial ...passed 00:24:08.650 Test: blockdev reset ...[2024-11-20 07:21:32.776912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:08.650 [2024-11-20 07:21:32.777432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1058310 (9): Bad file descriptor 00:24:08.650 [2024-11-20 07:21:32.794154] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:24:08.650 passed 00:24:08.650 Test: blockdev write read 8 blocks ...passed 00:24:08.650 Test: blockdev write read size > 128k ...passed 00:24:08.650 Test: blockdev write read invalid size ...passed 00:24:08.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:08.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:08.650 Test: blockdev write read max offset ...passed 00:24:08.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:08.650 Test: blockdev writev readv 8 blocks ...passed 00:24:08.650 Test: blockdev writev readv 30 x 1block ...passed 00:24:08.650 Test: blockdev writev readv block ...passed 00:24:08.650 Test: blockdev writev readv size > 128k ...passed 00:24:08.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:08.650 Test: blockdev comparev and writev ...[2024-11-20 07:21:32.799660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.799693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.799707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.799713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.800216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.800242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.800254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.800260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.800578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.800594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.800606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.800612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.801024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.801041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.801054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:08.650 [2024-11-20 07:21:32.801060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.650 passed 00:24:08.650 Test: blockdev nvme passthru rw ...passed 00:24:08.650 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:21:32.801569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:08.650 [2024-11-20 07:21:32.801589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.801664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:08.650 [2024-11-20 07:21:32.801676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.801752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:08.650 [2024-11-20 07:21:32.801760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.650 [2024-11-20 07:21:32.801823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:08.650 [2024-11-20 07:21:32.801835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.650 passed 00:24:08.650 Test: blockdev nvme admin passthru ...passed 00:24:08.650 Test: blockdev copy ...passed 00:24:08.650 00:24:08.650 Run Summary: Type Total Ran Passed Failed Inactive 00:24:08.650 suites 1 1 n/a 0 0 00:24:08.650 tests 23 23 23 0 0 00:24:08.650 asserts 152 152 152 0 n/a 00:24:08.650 00:24:08.650 Elapsed time = 0.142 seconds 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:08.908 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:09.167 rmmod nvme_tcp 00:24:09.167 rmmod nvme_fabrics 00:24:09.167 rmmod nvme_keyring 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@106 -- # set -e 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 69473 ']' 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 69473 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 69473 ']' 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 69473 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69473 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69473' 00:24:09.167 killing process with pid 69473 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 69473 00:24:09.167 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 69473 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:24:09.451 00:24:09.451 real 0m2.706s 00:24:09.451 user 0m8.398s 00:24:09.451 sys 0m0.998s 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.451 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 
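Condensed, the teardown traced above (killprocess followed by nvmftestfini and nvmf_fini) reduces to a handful of commands. Names and addresses come from the log; error handling and the xtrace plumbing are stripped for brevity:

    kill 69473 && wait 69473                    # killprocess: stop nvmf_tgt
    modprobe -v -r nvme-tcp                     # also drops nvme_fabrics/nvme_keyring
    ip netns delete nvmf_ns_spdk                # _remove_target_ns; target0/target1 go with it
    ip link delete nvmf_br                      # delete_main_bridge
    ip link delete initiator0
    ip link delete initiator1                   # target devs already gone, hence the 'continue's
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-tagged rules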
00:24:09.451 ************************************ 00:24:09.451 END TEST nvmf_bdevio_no_huge 00:24:09.451 ************************************ 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.725 ************************************ 00:24:09.725 START TEST nvmf_tls 00:24:09.725 ************************************ 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:09.725 * Looking for test storage... 00:24:09.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:09.725 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.726 --rc genhtml_branch_coverage=1 00:24:09.726 --rc genhtml_function_coverage=1 00:24:09.726 --rc genhtml_legend=1 00:24:09.726 --rc geninfo_all_blocks=1 00:24:09.726 --rc geninfo_unexecuted_blocks=1 00:24:09.726 00:24:09.726 ' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.726 --rc genhtml_branch_coverage=1 00:24:09.726 --rc genhtml_function_coverage=1 00:24:09.726 --rc genhtml_legend=1 00:24:09.726 --rc geninfo_all_blocks=1 00:24:09.726 --rc geninfo_unexecuted_blocks=1 00:24:09.726 00:24:09.726 ' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.726 --rc genhtml_branch_coverage=1 00:24:09.726 --rc genhtml_function_coverage=1 00:24:09.726 --rc genhtml_legend=1 00:24:09.726 --rc geninfo_all_blocks=1 00:24:09.726 --rc geninfo_unexecuted_blocks=1 00:24:09.726 00:24:09.726 ' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.726 --rc genhtml_branch_coverage=1 00:24:09.726 --rc genhtml_function_coverage=1 00:24:09.726 --rc genhtml_legend=1 00:24:09.726 --rc geninfo_all_blocks=1 00:24:09.726 --rc geninfo_unexecuted_blocks=1 00:24:09.726 00:24:09.726 ' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.726 07:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:09.726 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 
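One detail worth noting in the sourcing above: the harmless "[: : integer expression expected" message from common.sh line 31 is bash complaining about an empty string on the left-hand side of -eq, as the traced test '[' '' -eq 1 ']' shows. A two-line illustration (FLAG is a stand-in for whichever variable is unset in this run):

    FLAG=""
    [ "$FLAG" -eq 1 ]        # bash: [: : integer expression expected (exit 2)
    [ "${FLAG:-0}" -eq 1 ]   # defaulting the expansion avoids the error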
00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@223 -- # create_target_ns 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.726 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # 
eval ' ip link set nvmf_br up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:24:09.727 10.0.0.1 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:09.727 10.0.0.2 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:09.727 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator0_br 
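The setup_interface_pair trace running through here wires initiator0 and target0 together across the namespace boundary. Condensed into standalone commands (all lifted from the log, including the iptables ACCEPT rule that follows just below), pair 0 amounts to:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                 # only the target end moves
    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias  # ifalias is what get_ip_address reads back
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT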
00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:09.987 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 
-- # local dev=initiator1 peer=initiator1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target1 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772163 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:09.988 10.0.0.3 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772164 00:24:09.988 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:09.988 10.0.0.4 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:09.988 07:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:09.988 07:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:09.988 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:09.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:24:09.989 00:24:09.989 --- 10.0.0.1 ping statistics --- 00:24:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.989 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:09.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:09.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:24:09.989 00:24:09.989 --- 10.0.0.2 ping statistics --- 00:24:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.989 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:09.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:09.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:24:09.989 00:24:09.989 --- 10.0.0.3 ping statistics --- 00:24:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.989 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:09.989 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:09.989 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:24:09.989 00:24:09.989 --- 10.0.0.4 ping statistics --- 00:24:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.989 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # return 0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:09.989 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
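The traces above show how nvmf/setup.sh derives and records its addresses: each interface pair consumes two values from an integer pool, val_to_ip unpacks them into dotted-quad form (167772163 becomes 10.0.0.3, 167772164 becomes 10.0.0.4), and the result is written to /sys/class/net/<dev>/ifalias so later lookups such as get_ip_address can read it back without parsing `ip addr`. A minimal standalone sketch of that conversion, reconstructed from the xtrace output rather than copied from setup.sh:

    # val_to_ip: unpack a 32-bit pool value into dotted-quad form, mirroring
    # the printf '%u.%u.%u.%u' call visible in the trace (167772163 = 0x0A000003).
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(((val >> 24) & 0xff)) \
            $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) \
            $((val & 0xff))
    }

    val_to_ip 167772163   # -> 10.0.0.3 (initiator1, host side)
    val_to_ip 167772164   # -> 10.0.0.4 (target1, assigned inside nvmf_ns_spdk)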
00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:09.990 ' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=69740 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 69740 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 69740 ']' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.990 07:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.990 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.248 [2024-11-20 07:21:34.214294] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:10.248 [2024-11-20 07:21:34.214347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.248 [2024-11-20 07:21:34.358393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.248 [2024-11-20 07:21:34.392975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.248 [2024-11-20 07:21:34.393011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.248 [2024-11-20 07:21:34.393017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.248 [2024-11-20 07:21:34.393023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.248 [2024-11-20 07:21:34.393028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
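At this point nvmfappstart brings the target up: nvmf_tgt is launched inside the nvmf_ns_spdk namespace with core mask 0x2 and --wait-for-rpc, and waitforlisten blocks until the RPC socket answers. The retry loop below is a reduced sketch of that pattern, not the verbatim autotest_common.sh helper (the real waitforlisten also re-checks that the pid is still alive between probes):

    # Launch the target in the test namespace, then hold until its RPC socket
    # (/var/tmp/spdk.sock) accepts requests; rpc_get_methods is a cheap probe.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done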
00:24:10.248 [2024-11-20 07:21:34.393298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:11.180 true 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:11.180 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:11.438 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:11.438 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:11.438 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:11.696 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:11.696 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:12.011 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:12.011 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:12.011 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:12.011 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:12.011 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:12.292 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:12.549 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:24:12.549 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:12.807 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:12.807 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:12.807 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:13.064 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:13.064 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NTlhK7Yymh 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Cx6JdcT6gm 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # 
echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NTlhK7Yymh 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Cx6JdcT6gm 00:24:13.323 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:13.581 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:13.839 [2024-11-20 07:21:37.838638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:13.839 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NTlhK7Yymh 00:24:13.839 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NTlhK7Yymh 00:24:13.839 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:14.096 [2024-11-20 07:21:38.068627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.096 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:14.096 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:14.355 [2024-11-20 07:21:38.480701] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.355 [2024-11-20 07:21:38.480847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.355 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.613 malloc0 00:24:14.613 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.871 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NTlhK7Yymh 00:24:15.128 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.128 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NTlhK7Yymh 00:24:27.339 Initializing NVMe Controllers 00:24:27.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:27.339 Initialization complete. Launching workers. 
00:24:27.339 ======================================================== 00:24:27.339 Latency(us) 00:24:27.339 Device Information : IOPS MiB/s Average min max 00:24:27.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16442.10 64.23 3892.78 1100.24 11132.38 00:24:27.339 ======================================================== 00:24:27.339 Total : 16442.10 64.23 3892.78 1100.24 11132.38 00:24:27.339 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTlhK7Yymh 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NTlhK7Yymh 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=69967 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 69967 /var/tmp/bdevperf.sock 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 69967 ']' 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.339 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.339 [2024-11-20 07:21:49.532943] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:27.339 [2024-11-20 07:21:49.533011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:24:27.339 [2024-11-20 07:21:49.673569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.339 [2024-11-20 07:21:49.709322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.339 [2024-11-20 07:21:49.739877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:27.339 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.339 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:27.339 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTlhK7Yymh 00:24:27.339 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.339 [2024-11-20 07:21:50.800254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.339 TLSTESTn1 00:24:27.339 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:27.339 Running I/O for 10 seconds... 00:24:28.837 6294.00 IOPS, 24.59 MiB/s [2024-11-20T07:21:54.413Z] 6361.50 IOPS, 24.85 MiB/s [2024-11-20T07:21:54.979Z] 6464.33 IOPS, 25.25 MiB/s [2024-11-20T07:21:56.388Z] 6635.25 IOPS, 25.92 MiB/s [2024-11-20T07:21:57.331Z] 6735.20 IOPS, 26.31 MiB/s [2024-11-20T07:21:58.264Z] 6793.50 IOPS, 26.54 MiB/s [2024-11-20T07:21:59.198Z] 6843.29 IOPS, 26.73 MiB/s [2024-11-20T07:22:00.131Z] 6885.38 IOPS, 26.90 MiB/s [2024-11-20T07:22:01.134Z] 6918.56 IOPS, 27.03 MiB/s [2024-11-20T07:22:01.134Z] 6938.30 IOPS, 27.10 MiB/s 00:24:36.931 Latency(us) 00:24:36.931 [2024-11-20T07:22:01.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.931 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:36.931 Verification LBA range: start 0x0 length 0x2000 00:24:36.931 TLSTESTn1 : 10.01 6944.28 27.13 0.00 0.00 18402.82 3579.27 16131.94 00:24:36.931 [2024-11-20T07:22:01.134Z] =================================================================================================================== 00:24:36.931 [2024-11-20T07:22:01.134Z] Total : 6944.28 27.13 0.00 0.00 18402.82 3579.27 16131.94 00:24:36.931 { 00:24:36.931 "results": [ 00:24:36.931 { 00:24:36.931 "job": "TLSTESTn1", 00:24:36.931 "core_mask": "0x4", 00:24:36.931 "workload": "verify", 00:24:36.931 "status": "finished", 00:24:36.931 "verify_range": { 00:24:36.931 "start": 0, 00:24:36.931 "length": 8192 00:24:36.931 }, 00:24:36.931 "queue_depth": 128, 00:24:36.931 "io_size": 4096, 00:24:36.931 "runtime": 10.009819, 00:24:36.931 "iops": 6944.281410083439, 00:24:36.931 "mibps": 27.126099258138435, 00:24:36.931 "io_failed": 0, 00:24:36.931 "io_timeout": 0, 00:24:36.931 "avg_latency_us": 18402.82124157438, 00:24:36.931 "min_latency_us": 3579.273846153846, 00:24:36.931 
"max_latency_us": 16131.938461538462 00:24:36.931 } 00:24:36.931 ], 00:24:36.931 "core_count": 1 00:24:36.931 } 00:24:36.931 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.931 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 69967 00:24:36.931 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 69967 ']' 00:24:36.931 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 69967 00:24:36.931 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69967 00:24:36.931 killing process with pid 69967 00:24:36.931 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.931 00:24:36.931 Latency(us) 00:24:36.931 [2024-11-20T07:22:01.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.931 [2024-11-20T07:22:01.134Z] =================================================================================================================== 00:24:36.931 [2024-11-20T07:22:01.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69967' 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 69967 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 69967 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cx6JdcT6gm 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cx6JdcT6gm 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cx6JdcT6gm 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # psk=/tmp/tmp.Cx6JdcT6gm 00:24:36.931 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70111 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70111 /var/tmp/bdevperf.sock 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70111 ']' 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.190 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.190 [2024-11-20 07:22:01.167262] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:37.190 [2024-11-20 07:22:01.167330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70111 ] 00:24:37.190 [2024-11-20 07:22:01.307429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.190 [2024-11-20 07:22:01.338839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.190 [2024-11-20 07:22:01.366973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:38.123 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.123 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:38.123 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Cx6JdcT6gm 00:24:38.123 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.382 [2024-11-20 07:22:02.429567] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.382 [2024-11-20 07:22:02.433579] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:38.382 [2024-11-20 07:22:02.434315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1db5fb0 (107): Transport endpoint is not connected 00:24:38.382 [2024-11-20 07:22:02.435307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db5fb0 (9): Bad file descriptor 00:24:38.382 [2024-11-20 07:22:02.436306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:38.382 [2024-11-20 07:22:02.436320] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:38.382 [2024-11-20 07:22:02.436326] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:38.382 [2024-11-20 07:22:02.436333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:38.382 request: 00:24:38.382 { 00:24:38.382 "name": "TLSTEST", 00:24:38.382 "trtype": "tcp", 00:24:38.382 "traddr": "10.0.0.2", 00:24:38.382 "adrfam": "ipv4", 00:24:38.382 "trsvcid": "4420", 00:24:38.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.382 "prchk_reftag": false, 00:24:38.382 "prchk_guard": false, 00:24:38.382 "hdgst": false, 00:24:38.382 "ddgst": false, 00:24:38.382 "psk": "key0", 00:24:38.382 "allow_unrecognized_csi": false, 00:24:38.382 "method": "bdev_nvme_attach_controller", 00:24:38.382 "req_id": 1 00:24:38.382 } 00:24:38.382 Got JSON-RPC error response 00:24:38.382 response: 00:24:38.382 { 00:24:38.382 "code": -5, 00:24:38.382 "message": "Input/output error" 00:24:38.382 } 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70111 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70111 ']' 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70111 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70111 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:38.382 killing process with pid 70111 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70111' 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70111 00:24:38.382 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.382 00:24:38.382 Latency(us) 00:24:38.382 [2024-11-20T07:22:02.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.382 [2024-11-20T07:22:02.585Z] =================================================================================================================== 00:24:38.382 [2024-11-20T07:22:02.585Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70111 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:38.382 07:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NTlhK7Yymh 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NTlhK7Yymh 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NTlhK7Yymh 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NTlhK7Yymh 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70134 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70134 /var/tmp/bdevperf.sock 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70134 ']' 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
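The client-side pattern every case in this block exercises is the same two-RPC sequence against the bdevperf control socket: register the PSK file as a named keyring key, then attach a TLS-protected NVMe/TCP controller that references it. A minimal sketch, reusing the exact socket path, key path, and NQNs from the case above (assuming a bdevperf instance is already listening on /var/tmp/bdevperf.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register the PSK file with the keyring under the name key0
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Cx6JdcT6gm
    # Attach an NVMe/TCP controller over TLS, using key0 as the pre-shared key
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

When the target has no PSK registered for that host/subsystem pair, the attach fails with the code -5 (Input/output error) JSON-RPC response recorded above.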
00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.382 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.640 [2024-11-20 07:22:02.608498] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:38.640 [2024-11-20 07:22:02.608568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70134 ] 00:24:38.640 [2024-11-20 07:22:02.737607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.640 [2024-11-20 07:22:02.768000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.640 [2024-11-20 07:22:02.795800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:39.596 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.596 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:39.596 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTlhK7Yymh 00:24:39.596 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:39.854 [2024-11-20 07:22:03.856342] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.854 [2024-11-20 07:22:03.862711] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:39.854 [2024-11-20 07:22:03.862739] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:39.854 [2024-11-20 07:22:03.862768] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:39.854 [2024-11-20 07:22:03.863096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54fb0 (107): Transport endpoint is not connected 00:24:39.854 [2024-11-20 07:22:03.864089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54fb0 (9): Bad file descriptor 00:24:39.854 [2024-11-20 07:22:03.865087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:39.854 [2024-11-20 07:22:03.865102] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:39.854 [2024-11-20 07:22:03.865108] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:39.854 [2024-11-20 07:22:03.865115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:39.854 request: 00:24:39.854 { 00:24:39.854 "name": "TLSTEST", 00:24:39.854 "trtype": "tcp", 00:24:39.854 "traddr": "10.0.0.2", 00:24:39.854 "adrfam": "ipv4", 00:24:39.854 "trsvcid": "4420", 00:24:39.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.854 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.854 "prchk_reftag": false, 00:24:39.854 "prchk_guard": false, 00:24:39.854 "hdgst": false, 00:24:39.854 "ddgst": false, 00:24:39.854 "psk": "key0", 00:24:39.854 "allow_unrecognized_csi": false, 00:24:39.854 "method": "bdev_nvme_attach_controller", 00:24:39.854 "req_id": 1 00:24:39.854 } 00:24:39.854 Got JSON-RPC error response 00:24:39.854 response: 00:24:39.854 { 00:24:39.854 "code": -5, 00:24:39.854 "message": "Input/output error" 00:24:39.854 } 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70134 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70134 ']' 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70134 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70134 00:24:39.854 killing process with pid 70134 00:24:39.854 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.854 00:24:39.854 Latency(us) 00:24:39.854 [2024-11-20T07:22:04.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.854 [2024-11-20T07:22:04.057Z] =================================================================================================================== 00:24:39.854 [2024-11-20T07:22:04.057Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70134' 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70134 00:24:39.854 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70134 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTlhK7Yymh 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTlhK7Yymh 
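The mismatched-NQN cases differ from a working run only in which NQNs the initiator presents: key0 is valid on the client side, but during the handshake the target builds the PSK identity from the connecting host NQN and subsystem NQN, as the error "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" shows, so a key never registered for that pair cannot match. A sketch of the negative case just recorded (commands taken from the log; the identity layout is inferred from the error text):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTlhK7Yymh
    # host2 has no PSK registered on the target for cnode1, so the TLS handshake
    # identity lookup ("NVMe0R01 <host NQN> <subsystem NQN>") misses and the
    # attach returns -5 even though key0 itself loaded fine:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host2 --psk key0

The next case (@153) flips the other half of the pair, connecting host1 to cnode2, and fails the same way.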
00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:39.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTlhK7Yymh 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NTlhK7Yymh 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70158 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70158 /var/tmp/bdevperf.sock 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70158 ']' 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.854 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.855 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.855 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.855 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.855 [2024-11-20 07:22:04.048236] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:39.855 [2024-11-20 07:22:04.048290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:24:40.113 [2024-11-20 07:22:04.179078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.113 [2024-11-20 07:22:04.210666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.113 [2024-11-20 07:22:04.239743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:41.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:41.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTlhK7Yymh 00:24:41.047 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:41.305 [2024-11-20 07:22:05.306291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.305 [2024-11-20 07:22:05.312843] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:41.305 [2024-11-20 07:22:05.312868] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:41.305 [2024-11-20 07:22:05.312896] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:41.305 [2024-11-20 07:22:05.312971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2464fb0 (107): Transport endpoint is not connected 00:24:41.305 [2024-11-20 07:22:05.313964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2464fb0 (9): Bad file descriptor 00:24:41.305 [2024-11-20 07:22:05.314964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:41.305 [2024-11-20 07:22:05.314978] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:41.305 [2024-11-20 07:22:05.314983] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:41.305 [2024-11-20 07:22:05.314990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:41.305 request: 00:24:41.305 { 00:24:41.305 "name": "TLSTEST", 00:24:41.305 "trtype": "tcp", 00:24:41.305 "traddr": "10.0.0.2", 00:24:41.305 "adrfam": "ipv4", 00:24:41.305 "trsvcid": "4420", 00:24:41.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:41.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.305 "prchk_reftag": false, 00:24:41.305 "prchk_guard": false, 00:24:41.305 "hdgst": false, 00:24:41.305 "ddgst": false, 00:24:41.305 "psk": "key0", 00:24:41.305 "allow_unrecognized_csi": false, 00:24:41.305 "method": "bdev_nvme_attach_controller", 00:24:41.305 "req_id": 1 00:24:41.305 } 00:24:41.305 Got JSON-RPC error response 00:24:41.305 response: 00:24:41.305 { 00:24:41.305 "code": -5, 00:24:41.305 "message": "Input/output error" 00:24:41.305 } 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70158 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70158 ']' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70158 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70158 00:24:41.305 killing process with pid 70158 00:24:41.305 Received shutdown signal, test time was about 10.000000 seconds 00:24:41.305 00:24:41.305 Latency(us) 00:24:41.305 [2024-11-20T07:22:05.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.305 [2024-11-20T07:22:05.508Z] =================================================================================================================== 00:24:41.305 [2024-11-20T07:22:05.508Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70158' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70158 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70158 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:41.305 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70191 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70191 /var/tmp/bdevperf.sock 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70191 ']' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.305 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.305 [2024-11-20 07:22:05.485097] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:41.305 [2024-11-20 07:22:05.485303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70191 ] 00:24:41.563 [2024-11-20 07:22:05.620137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.563 [2024-11-20 07:22:05.652387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.563 [2024-11-20 07:22:05.682382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:41.563 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.563 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:41.563 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:41.821 [2024-11-20 07:22:05.909729] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:41.821 [2024-11-20 07:22:05.909874] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:41.821 request: 00:24:41.821 { 00:24:41.821 "name": "key0", 00:24:41.821 "path": "", 00:24:41.821 "method": "keyring_file_add_key", 00:24:41.821 "req_id": 1 00:24:41.821 } 00:24:41.821 Got JSON-RPC error response 00:24:41.821 response: 00:24:41.821 { 00:24:41.821 "code": -1, 00:24:41.821 "message": "Operation not permitted" 00:24:41.821 } 00:24:41.821 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.079 [2024-11-20 07:22:06.105852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:42.079 [2024-11-20 07:22:06.105989] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:42.079 request: 00:24:42.079 { 00:24:42.079 "name": "TLSTEST", 00:24:42.079 "trtype": "tcp", 00:24:42.079 "traddr": "10.0.0.2", 00:24:42.079 "adrfam": "ipv4", 00:24:42.079 "trsvcid": "4420", 00:24:42.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.079 "prchk_reftag": false, 00:24:42.079 "prchk_guard": false, 00:24:42.079 "hdgst": false, 00:24:42.079 "ddgst": false, 00:24:42.079 "psk": "key0", 00:24:42.079 "allow_unrecognized_csi": false, 00:24:42.079 "method": "bdev_nvme_attach_controller", 00:24:42.079 "req_id": 1 00:24:42.079 } 00:24:42.079 Got JSON-RPC error response 00:24:42.079 response: 00:24:42.079 { 00:24:42.079 "code": -126, 00:24:42.079 "message": "Required key not available" 00:24:42.079 } 00:24:42.079 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70191 00:24:42.079 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70191 ']' 00:24:42.079 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70191 00:24:42.079 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.079 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.080 07:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70191 00:24:42.080 killing process with pid 70191 00:24:42.080 Received shutdown signal, test time was about 10.000000 seconds 00:24:42.080 00:24:42.080 Latency(us) 00:24:42.080 [2024-11-20T07:22:06.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.080 [2024-11-20T07:22:06.283Z] =================================================================================================================== 00:24:42.080 [2024-11-20T07:22:06.283Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70191' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70191 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70191 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 69740 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 69740 ']' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 69740 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69740 00:24:42.080 killing process with pid 69740 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69740' 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 69740 00:24:42.080 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 69740 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 
-- # prefix=NVMeTLSkey-1 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VmZpFsi8I2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VmZpFsi8I2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70222 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70222 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70222 ']' 00:24:42.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.357 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.357 [2024-11-20 07:22:06.456653] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:42.357 [2024-11-20 07:22:06.456988] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.690 [2024-11-20 07:22:06.593852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.690 [2024-11-20 07:22:06.624324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.690 [2024-11-20 07:22:06.624486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:42.690 [2024-11-20 07:22:06.624536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.690 [2024-11-20 07:22:06.624558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.690 [2024-11-20 07:22:06.624570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.690 [2024-11-20 07:22:06.624828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.690 [2024-11-20 07:22:06.653072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VmZpFsi8I2 00:24:42.690 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:42.949 [2024-11-20 07:22:06.914863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.949 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:42.949 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:43.208 [2024-11-20 07:22:07.242907] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.208 [2024-11-20 07:22:07.243057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.208 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:43.465 malloc0 00:24:43.465 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:43.722 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:24:43.722 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
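The long-form key registered on the target here is the interchange-format string generated at @160 by format_interchange_psk. A minimal sketch of what that helper appears to compute, mirroring the suite's own python heredoc rather than copying nvmf/common.sh verbatim, with two stated assumptions: the trailing four bytes are the CRC-32 of the configured key bytes appended little-endian, and the 02 digest field selects the SHA-384 flavor of the interchange format (01 would be SHA-256):

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 - "$key" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order
    print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
    PY
    # Should print the key_long value captured above:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

That string is written to /tmp/tmp.VmZpFsi8I2, chmod'd 0600, and registered on the target via keyring_file_add_key plus nvmf_subsystem_add_host --psk key0, which is why the TLSTESTn1 run that follows completes with real I/O instead of an attach error.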
00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VmZpFsi8I2 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VmZpFsi8I2 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70264 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70264 /var/tmp/bdevperf.sock 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70264 ']' 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.981 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.981 [2024-11-20 07:22:08.125113] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:43.981 [2024-11-20 07:22:08.125351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70264 ] 00:24:44.238 [2024-11-20 07:22:08.261677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.238 [2024-11-20 07:22:08.300611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.238 [2024-11-20 07:22:08.333047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:44.806 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.806 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:45.063 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:24:45.063 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.319 [2024-11-20 07:22:09.387399] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.319 TLSTESTn1 00:24:45.319 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:45.593 Running I/O for 10 seconds... 00:24:47.463 6301.00 IOPS, 24.61 MiB/s [2024-11-20T07:22:12.654Z] 6356.50 IOPS, 24.83 MiB/s [2024-11-20T07:22:13.588Z] 6535.33 IOPS, 25.53 MiB/s [2024-11-20T07:22:14.960Z] 6653.75 IOPS, 25.99 MiB/s [2024-11-20T07:22:15.562Z] 6741.40 IOPS, 26.33 MiB/s [2024-11-20T07:22:16.933Z] 6798.83 IOPS, 26.56 MiB/s [2024-11-20T07:22:17.867Z] 6848.57 IOPS, 26.75 MiB/s [2024-11-20T07:22:18.808Z] 6886.50 IOPS, 26.90 MiB/s [2024-11-20T07:22:19.818Z] 6908.89 IOPS, 26.99 MiB/s [2024-11-20T07:22:19.818Z] 6919.50 IOPS, 27.03 MiB/s 00:24:55.615 Latency(us) 00:24:55.615 [2024-11-20T07:22:19.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.615 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:55.615 Verification LBA range: start 0x0 length 0x2000 00:24:55.616 TLSTESTn1 : 10.01 6925.78 27.05 0.00 0.00 18451.64 3352.42 15930.29 00:24:55.616 [2024-11-20T07:22:19.819Z] =================================================================================================================== 00:24:55.616 [2024-11-20T07:22:19.819Z] Total : 6925.78 27.05 0.00 0.00 18451.64 3352.42 15930.29 00:24:55.616 { 00:24:55.616 "results": [ 00:24:55.616 { 00:24:55.616 "job": "TLSTESTn1", 00:24:55.616 "core_mask": "0x4", 00:24:55.616 "workload": "verify", 00:24:55.616 "status": "finished", 00:24:55.616 "verify_range": { 00:24:55.616 "start": 0, 00:24:55.616 "length": 8192 00:24:55.616 }, 00:24:55.616 "queue_depth": 128, 00:24:55.616 "io_size": 4096, 00:24:55.616 "runtime": 10.00942, 00:24:55.616 "iops": 6925.775919084223, 00:24:55.616 "mibps": 27.053812183922744, 00:24:55.616 "io_failed": 0, 00:24:55.616 "io_timeout": 0, 00:24:55.616 "avg_latency_us": 18451.6383692392, 00:24:55.616 "min_latency_us": 3352.4184615384615, 00:24:55.616 
"max_latency_us": 15930.289230769231 00:24:55.616 } 00:24:55.616 ], 00:24:55.616 "core_count": 1 00:24:55.616 } 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70264 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70264 ']' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70264 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70264 00:24:55.616 killing process with pid 70264 00:24:55.616 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.616 00:24:55.616 Latency(us) 00:24:55.616 [2024-11-20T07:22:19.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.616 [2024-11-20T07:22:19.819Z] =================================================================================================================== 00:24:55.616 [2024-11-20T07:22:19.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70264' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70264 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70264 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VmZpFsi8I2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VmZpFsi8I2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VmZpFsi8I2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VmZpFsi8I2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VmZpFsi8I2 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70402 00:24:55.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70402 /var/tmp/bdevperf.sock 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70402 ']' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:55.616 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.616 [2024-11-20 07:22:19.740785] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:55.616 [2024-11-20 07:22:19.740852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70402 ] 00:24:55.875 [2024-11-20 07:22:19.868414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.875 [2024-11-20 07:22:19.900422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.875 [2024-11-20 07:22:19.929777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:56.440 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.440 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:56.440 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:24:56.699 [2024-11-20 07:22:20.803858] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VmZpFsi8I2': 0100666 00:24:56.699 [2024-11-20 07:22:20.803893] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:56.699 request: 00:24:56.699 { 00:24:56.699 "name": "key0", 00:24:56.699 "path": "/tmp/tmp.VmZpFsi8I2", 00:24:56.699 "method": "keyring_file_add_key", 00:24:56.699 "req_id": 1 00:24:56.699 } 00:24:56.699 Got JSON-RPC error response 00:24:56.699 response: 00:24:56.699 { 00:24:56.699 "code": -1, 00:24:56.699 "message": "Operation not permitted" 00:24:56.699 } 00:24:56.699 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:56.958 [2024-11-20 07:22:21.015990] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.958 [2024-11-20 07:22:21.016048] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:56.958 request: 00:24:56.958 { 00:24:56.958 "name": "TLSTEST", 00:24:56.958 "trtype": "tcp", 00:24:56.958 "traddr": "10.0.0.2", 00:24:56.958 "adrfam": "ipv4", 00:24:56.958 "trsvcid": "4420", 00:24:56.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.958 "prchk_reftag": false, 00:24:56.958 "prchk_guard": false, 00:24:56.958 "hdgst": false, 00:24:56.958 "ddgst": false, 00:24:56.958 "psk": "key0", 00:24:56.958 "allow_unrecognized_csi": false, 00:24:56.958 "method": "bdev_nvme_attach_controller", 00:24:56.958 "req_id": 1 00:24:56.958 } 00:24:56.958 Got JSON-RPC error response 00:24:56.958 response: 00:24:56.958 { 00:24:56.958 "code": -126, 00:24:56.958 "message": "Required key not available" 00:24:56.958 } 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70402 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70402 ']' 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70402 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70402 00:24:56.958 killing process with pid 70402 00:24:56.958 Received shutdown signal, test time was about 10.000000 seconds 00:24:56.958 00:24:56.958 Latency(us) 00:24:56.958 [2024-11-20T07:22:21.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.958 [2024-11-20T07:22:21.161Z] =================================================================================================================== 00:24:56.958 [2024-11-20T07:22:21.161Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70402' 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70402 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70402 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 70222 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70222 ']' 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70222 00:24:56.958 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70222 00:24:57.217 killing process with pid 70222 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70222' 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70222 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70222 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70430 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70430 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70430 ']' 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.217 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.217 [2024-11-20 07:22:21.320517] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:57.217 [2024-11-20 07:22:21.320573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.476 [2024-11-20 07:22:21.454551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.476 [2024-11-20 07:22:21.485056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.476 [2024-11-20 07:22:21.485097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.476 [2024-11-20 07:22:21.485102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.476 [2024-11-20 07:22:21.485106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.476 [2024-11-20 07:22:21.485109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
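The failure at @172 above is purely a key-file permission check: after @171 chmod's the PSK file to 0666, keyring_file_add_key rejects it ("Invalid permissions for key file '/tmp/tmp.VmZpFsi8I2': 0100666", code -1) and the subsequent attach reports -126, Required key not available. Keyring key files evidently must not be group- or world-accessible; restoring owner-only access, as tls.sh@182 does further below, makes the same file loadable again:

    chmod 0666 /tmp/tmp.VmZpFsi8I2   # rejected: mode 0100666 fails keyring_file_check_path
    chmod 0600 /tmp/tmp.VmZpFsi8I2   # accepted: owner-only, keyring_file_add_key succeeds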
00:24:57.476 [2024-11-20 07:22:21.485341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.476 [2024-11-20 07:22:21.513795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:58.041 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VmZpFsi8I2 00:24:58.042 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:58.299 [2024-11-20 07:22:22.475695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.299 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:58.642 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:58.919 [2024-11-20 07:22:22.907774] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.919 [2024-11-20 07:22:22.907929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.919 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:59.177 malloc0 00:24:59.177 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:59.177 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:24:59.435 
[2024-11-20 07:22:23.502045] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VmZpFsi8I2': 0100666 00:24:59.435 [2024-11-20 07:22:23.502078] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:59.435 request: 00:24:59.435 { 00:24:59.435 "name": "key0", 00:24:59.435 "path": "/tmp/tmp.VmZpFsi8I2", 00:24:59.435 "method": "keyring_file_add_key", 00:24:59.435 "req_id": 1 00:24:59.435 } 00:24:59.435 Got JSON-RPC error response 00:24:59.435 response: 00:24:59.435 { 00:24:59.435 "code": -1, 00:24:59.435 "message": "Operation not permitted" 00:24:59.435 } 00:24:59.435 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.694 [2024-11-20 07:22:23.714101] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:59.694 [2024-11-20 07:22:23.714151] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:59.694 request: 00:24:59.694 { 00:24:59.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.694 "host": "nqn.2016-06.io.spdk:host1", 00:24:59.694 "psk": "key0", 00:24:59.694 "method": "nvmf_subsystem_add_host", 00:24:59.694 "req_id": 1 00:24:59.694 } 00:24:59.694 Got JSON-RPC error response 00:24:59.694 response: 00:24:59.694 { 00:24:59.694 "code": -32603, 00:24:59.694 "message": "Internal error" 00:24:59.694 } 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70430 ']' 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.694 killing process with pid 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70430' 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70430 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VmZpFsi8I2 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70499 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70499 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70499 ']' 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.694 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.953 [2024-11-20 07:22:23.904617] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:59.953 [2024-11-20 07:22:23.904722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.953 [2024-11-20 07:22:24.042902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.953 [2024-11-20 07:22:24.078325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.953 [2024-11-20 07:22:24.078368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.953 [2024-11-20 07:22:24.078375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.953 [2024-11-20 07:22:24.078380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.953 [2024-11-20 07:22:24.078385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
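The failed run above is the expected negative case: keyring_file_add_key refuses a PSK file with mode 0666 ("Invalid permissions for key file ... 0100666", JSON-RPC error -1), which in turn makes nvmf_subsystem_add_host fail with "Key 'key0' does not exist". The chmod 0600 at target/tls.sh@182 is what lets the retry succeed. A minimal reproduction from an SPDK checkout, with the key path taken from this run:

key=/tmp/tmp.VmZpFsi8I2

# Group/world-readable key files are rejected by the keyring.
chmod 0666 "$key"
scripts/rpc.py keyring_file_add_key key0 "$key" \
    && echo "unexpected success" || echo "rejected as expected"

# Owner-only permissions satisfy the check.
chmod 0600 "$key"
scripts/rpc.py keyring_file_add_key key0 "$key"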
00:24:59.953 [2024-11-20 07:22:24.078643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.953 [2024-11-20 07:22:24.109546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VmZpFsi8I2 00:25:00.886 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:00.886 [2024-11-20 07:22:25.006355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.886 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:01.144 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:01.401 [2024-11-20 07:22:25.410412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.401 [2024-11-20 07:22:25.410570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.401 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:01.716 malloc0 00:25:01.716 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:01.716 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:25:01.989 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=70549 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 70549 /var/tmp/bdevperf.sock 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70549 ']' 
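Stripped of the xtrace noise, the successful setup_nvmf_tgt pass above is the following RPC sequence. The TLS-specific pieces are -k on the listener (require a secure channel) and --psk on nvmf_subsystem_add_host, which binds host1 to the file-backed key; everything else is ordinary NVMe/TCP bring-up. All values are the ones used in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o     # -o: c2h_success=false (cf. saved config below)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k        # -k: TLS-only listener
$rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MiB in 4 KiB blocks (8192 blocks)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2   # key file must be mode 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0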
00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.247 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.247 [2024-11-20 07:22:26.320784] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:02.247 [2024-11-20 07:22:26.321229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70549 ] 00:25:02.505 [2024-11-20 07:22:26.457586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.505 [2024-11-20 07:22:26.495162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.505 [2024-11-20 07:22:26.527455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:03.071 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.071 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:03.071 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:25:03.329 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:03.588 [2024-11-20 07:22:27.588378] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.588 TLSTESTn1 00:25:03.588 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:03.845 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:03.845 "subsystems": [ 00:25:03.845 { 00:25:03.845 "subsystem": "keyring", 00:25:03.845 "config": [ 00:25:03.845 { 00:25:03.845 "method": "keyring_file_add_key", 00:25:03.845 "params": { 00:25:03.845 "name": "key0", 00:25:03.845 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:03.845 } 00:25:03.845 } 00:25:03.845 ] 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "subsystem": "iobuf", 00:25:03.845 "config": [ 00:25:03.845 { 00:25:03.845 "method": "iobuf_set_options", 00:25:03.845 "params": { 00:25:03.845 "small_pool_count": 8192, 00:25:03.845 "large_pool_count": 1024, 00:25:03.845 "small_bufsize": 8192, 00:25:03.845 "large_bufsize": 135168, 00:25:03.845 "enable_numa": false 00:25:03.845 } 00:25:03.845 } 00:25:03.845 ] 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "subsystem": "sock", 00:25:03.845 "config": [ 00:25:03.845 { 00:25:03.845 "method": "sock_set_default_impl", 00:25:03.845 "params": { 
00:25:03.845 "impl_name": "uring" 00:25:03.845 } 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "method": "sock_impl_set_options", 00:25:03.845 "params": { 00:25:03.845 "impl_name": "ssl", 00:25:03.845 "recv_buf_size": 4096, 00:25:03.845 "send_buf_size": 4096, 00:25:03.845 "enable_recv_pipe": true, 00:25:03.845 "enable_quickack": false, 00:25:03.845 "enable_placement_id": 0, 00:25:03.845 "enable_zerocopy_send_server": true, 00:25:03.845 "enable_zerocopy_send_client": false, 00:25:03.845 "zerocopy_threshold": 0, 00:25:03.845 "tls_version": 0, 00:25:03.845 "enable_ktls": false 00:25:03.845 } 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "method": "sock_impl_set_options", 00:25:03.845 "params": { 00:25:03.845 "impl_name": "posix", 00:25:03.845 "recv_buf_size": 2097152, 00:25:03.845 "send_buf_size": 2097152, 00:25:03.845 "enable_recv_pipe": true, 00:25:03.845 "enable_quickack": false, 00:25:03.845 "enable_placement_id": 0, 00:25:03.845 "enable_zerocopy_send_server": true, 00:25:03.845 "enable_zerocopy_send_client": false, 00:25:03.845 "zerocopy_threshold": 0, 00:25:03.845 "tls_version": 0, 00:25:03.845 "enable_ktls": false 00:25:03.845 } 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "method": "sock_impl_set_options", 00:25:03.845 "params": { 00:25:03.845 "impl_name": "uring", 00:25:03.845 "recv_buf_size": 2097152, 00:25:03.845 "send_buf_size": 2097152, 00:25:03.845 "enable_recv_pipe": true, 00:25:03.845 "enable_quickack": false, 00:25:03.845 "enable_placement_id": 0, 00:25:03.845 "enable_zerocopy_send_server": false, 00:25:03.845 "enable_zerocopy_send_client": false, 00:25:03.845 "zerocopy_threshold": 0, 00:25:03.845 "tls_version": 0, 00:25:03.845 "enable_ktls": false 00:25:03.845 } 00:25:03.845 } 00:25:03.845 ] 00:25:03.845 }, 00:25:03.845 { 00:25:03.845 "subsystem": "vmd", 00:25:03.845 "config": [] 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "subsystem": "accel", 00:25:03.846 "config": [ 00:25:03.846 { 00:25:03.846 "method": "accel_set_options", 00:25:03.846 "params": { 00:25:03.846 "small_cache_size": 128, 00:25:03.846 "large_cache_size": 16, 00:25:03.846 "task_count": 2048, 00:25:03.846 "sequence_count": 2048, 00:25:03.846 "buf_count": 2048 00:25:03.846 } 00:25:03.846 } 00:25:03.846 ] 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "subsystem": "bdev", 00:25:03.846 "config": [ 00:25:03.846 { 00:25:03.846 "method": "bdev_set_options", 00:25:03.846 "params": { 00:25:03.846 "bdev_io_pool_size": 65535, 00:25:03.846 "bdev_io_cache_size": 256, 00:25:03.846 "bdev_auto_examine": true, 00:25:03.846 "iobuf_small_cache_size": 128, 00:25:03.846 "iobuf_large_cache_size": 16 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_raid_set_options", 00:25:03.846 "params": { 00:25:03.846 "process_window_size_kb": 1024, 00:25:03.846 "process_max_bandwidth_mb_sec": 0 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_iscsi_set_options", 00:25:03.846 "params": { 00:25:03.846 "timeout_sec": 30 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_nvme_set_options", 00:25:03.846 "params": { 00:25:03.846 "action_on_timeout": "none", 00:25:03.846 "timeout_us": 0, 00:25:03.846 "timeout_admin_us": 0, 00:25:03.846 "keep_alive_timeout_ms": 10000, 00:25:03.846 "arbitration_burst": 0, 00:25:03.846 "low_priority_weight": 0, 00:25:03.846 "medium_priority_weight": 0, 00:25:03.846 "high_priority_weight": 0, 00:25:03.846 "nvme_adminq_poll_period_us": 10000, 00:25:03.846 "nvme_ioq_poll_period_us": 0, 00:25:03.846 "io_queue_requests": 0, 00:25:03.846 "delay_cmd_submit": 
true, 00:25:03.846 "transport_retry_count": 4, 00:25:03.846 "bdev_retry_count": 3, 00:25:03.846 "transport_ack_timeout": 0, 00:25:03.846 "ctrlr_loss_timeout_sec": 0, 00:25:03.846 "reconnect_delay_sec": 0, 00:25:03.846 "fast_io_fail_timeout_sec": 0, 00:25:03.846 "disable_auto_failback": false, 00:25:03.846 "generate_uuids": false, 00:25:03.846 "transport_tos": 0, 00:25:03.846 "nvme_error_stat": false, 00:25:03.846 "rdma_srq_size": 0, 00:25:03.846 "io_path_stat": false, 00:25:03.846 "allow_accel_sequence": false, 00:25:03.846 "rdma_max_cq_size": 0, 00:25:03.846 "rdma_cm_event_timeout_ms": 0, 00:25:03.846 "dhchap_digests": [ 00:25:03.846 "sha256", 00:25:03.846 "sha384", 00:25:03.846 "sha512" 00:25:03.846 ], 00:25:03.846 "dhchap_dhgroups": [ 00:25:03.846 "null", 00:25:03.846 "ffdhe2048", 00:25:03.846 "ffdhe3072", 00:25:03.846 "ffdhe4096", 00:25:03.846 "ffdhe6144", 00:25:03.846 "ffdhe8192" 00:25:03.846 ] 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_nvme_set_hotplug", 00:25:03.846 "params": { 00:25:03.846 "period_us": 100000, 00:25:03.846 "enable": false 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_malloc_create", 00:25:03.846 "params": { 00:25:03.846 "name": "malloc0", 00:25:03.846 "num_blocks": 8192, 00:25:03.846 "block_size": 4096, 00:25:03.846 "physical_block_size": 4096, 00:25:03.846 "uuid": "fc21a586-3ed0-4b80-8835-a03536bb9300", 00:25:03.846 "optimal_io_boundary": 0, 00:25:03.846 "md_size": 0, 00:25:03.846 "dif_type": 0, 00:25:03.846 "dif_is_head_of_md": false, 00:25:03.846 "dif_pi_format": 0 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "bdev_wait_for_examine" 00:25:03.846 } 00:25:03.846 ] 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "subsystem": "nbd", 00:25:03.846 "config": [] 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "subsystem": "scheduler", 00:25:03.846 "config": [ 00:25:03.846 { 00:25:03.846 "method": "framework_set_scheduler", 00:25:03.846 "params": { 00:25:03.846 "name": "static" 00:25:03.846 } 00:25:03.846 } 00:25:03.846 ] 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "subsystem": "nvmf", 00:25:03.846 "config": [ 00:25:03.846 { 00:25:03.846 "method": "nvmf_set_config", 00:25:03.846 "params": { 00:25:03.846 "discovery_filter": "match_any", 00:25:03.846 "admin_cmd_passthru": { 00:25:03.846 "identify_ctrlr": false 00:25:03.846 }, 00:25:03.846 "dhchap_digests": [ 00:25:03.846 "sha256", 00:25:03.846 "sha384", 00:25:03.846 "sha512" 00:25:03.846 ], 00:25:03.846 "dhchap_dhgroups": [ 00:25:03.846 "null", 00:25:03.846 "ffdhe2048", 00:25:03.846 "ffdhe3072", 00:25:03.846 "ffdhe4096", 00:25:03.846 "ffdhe6144", 00:25:03.846 "ffdhe8192" 00:25:03.846 ] 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_set_max_subsystems", 00:25:03.846 "params": { 00:25:03.846 "max_subsystems": 1024 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_set_crdt", 00:25:03.846 "params": { 00:25:03.846 "crdt1": 0, 00:25:03.846 "crdt2": 0, 00:25:03.846 "crdt3": 0 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_create_transport", 00:25:03.846 "params": { 00:25:03.846 "trtype": "TCP", 00:25:03.846 "max_queue_depth": 128, 00:25:03.846 "max_io_qpairs_per_ctrlr": 127, 00:25:03.846 "in_capsule_data_size": 4096, 00:25:03.846 "max_io_size": 131072, 00:25:03.846 "io_unit_size": 131072, 00:25:03.846 "max_aq_depth": 128, 00:25:03.846 "num_shared_buffers": 511, 00:25:03.846 "buf_cache_size": 4294967295, 00:25:03.846 "dif_insert_or_strip": false, 00:25:03.846 "zcopy": false, 
00:25:03.846 "c2h_success": false, 00:25:03.846 "sock_priority": 0, 00:25:03.846 "abort_timeout_sec": 1, 00:25:03.846 "ack_timeout": 0, 00:25:03.846 "data_wr_pool_size": 0 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_create_subsystem", 00:25:03.846 "params": { 00:25:03.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.846 "allow_any_host": false, 00:25:03.846 "serial_number": "SPDK00000000000001", 00:25:03.846 "model_number": "SPDK bdev Controller", 00:25:03.846 "max_namespaces": 10, 00:25:03.846 "min_cntlid": 1, 00:25:03.846 "max_cntlid": 65519, 00:25:03.846 "ana_reporting": false 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_subsystem_add_host", 00:25:03.846 "params": { 00:25:03.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.846 "host": "nqn.2016-06.io.spdk:host1", 00:25:03.846 "psk": "key0" 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_subsystem_add_ns", 00:25:03.846 "params": { 00:25:03.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.846 "namespace": { 00:25:03.846 "nsid": 1, 00:25:03.846 "bdev_name": "malloc0", 00:25:03.846 "nguid": "FC21A5863ED04B808835A03536BB9300", 00:25:03.846 "uuid": "fc21a586-3ed0-4b80-8835-a03536bb9300", 00:25:03.846 "no_auto_visible": false 00:25:03.846 } 00:25:03.846 } 00:25:03.846 }, 00:25:03.846 { 00:25:03.846 "method": "nvmf_subsystem_add_listener", 00:25:03.846 "params": { 00:25:03.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.846 "listen_address": { 00:25:03.846 "trtype": "TCP", 00:25:03.846 "adrfam": "IPv4", 00:25:03.846 "traddr": "10.0.0.2", 00:25:03.846 "trsvcid": "4420" 00:25:03.846 }, 00:25:03.846 "secure_channel": true 00:25:03.846 } 00:25:03.846 } 00:25:03.846 ] 00:25:03.846 } 00:25:03.846 ] 00:25:03.846 }' 00:25:03.846 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:04.105 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:04.105 "subsystems": [ 00:25:04.105 { 00:25:04.105 "subsystem": "keyring", 00:25:04.105 "config": [ 00:25:04.105 { 00:25:04.105 "method": "keyring_file_add_key", 00:25:04.105 "params": { 00:25:04.105 "name": "key0", 00:25:04.105 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:04.105 } 00:25:04.105 } 00:25:04.105 ] 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "subsystem": "iobuf", 00:25:04.105 "config": [ 00:25:04.105 { 00:25:04.105 "method": "iobuf_set_options", 00:25:04.105 "params": { 00:25:04.105 "small_pool_count": 8192, 00:25:04.105 "large_pool_count": 1024, 00:25:04.105 "small_bufsize": 8192, 00:25:04.105 "large_bufsize": 135168, 00:25:04.105 "enable_numa": false 00:25:04.105 } 00:25:04.105 } 00:25:04.105 ] 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "subsystem": "sock", 00:25:04.105 "config": [ 00:25:04.105 { 00:25:04.105 "method": "sock_set_default_impl", 00:25:04.105 "params": { 00:25:04.105 "impl_name": "uring" 00:25:04.105 } 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "method": "sock_impl_set_options", 00:25:04.105 "params": { 00:25:04.105 "impl_name": "ssl", 00:25:04.105 "recv_buf_size": 4096, 00:25:04.105 "send_buf_size": 4096, 00:25:04.105 "enable_recv_pipe": true, 00:25:04.105 "enable_quickack": false, 00:25:04.105 "enable_placement_id": 0, 00:25:04.105 "enable_zerocopy_send_server": true, 00:25:04.105 "enable_zerocopy_send_client": false, 00:25:04.105 "zerocopy_threshold": 0, 00:25:04.105 "tls_version": 0, 00:25:04.105 "enable_ktls": false 00:25:04.105 } 00:25:04.105 }, 
00:25:04.105 { 00:25:04.105 "method": "sock_impl_set_options", 00:25:04.105 "params": { 00:25:04.105 "impl_name": "posix", 00:25:04.105 "recv_buf_size": 2097152, 00:25:04.105 "send_buf_size": 2097152, 00:25:04.105 "enable_recv_pipe": true, 00:25:04.105 "enable_quickack": false, 00:25:04.105 "enable_placement_id": 0, 00:25:04.105 "enable_zerocopy_send_server": true, 00:25:04.105 "enable_zerocopy_send_client": false, 00:25:04.105 "zerocopy_threshold": 0, 00:25:04.105 "tls_version": 0, 00:25:04.105 "enable_ktls": false 00:25:04.105 } 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "method": "sock_impl_set_options", 00:25:04.105 "params": { 00:25:04.105 "impl_name": "uring", 00:25:04.105 "recv_buf_size": 2097152, 00:25:04.105 "send_buf_size": 2097152, 00:25:04.105 "enable_recv_pipe": true, 00:25:04.105 "enable_quickack": false, 00:25:04.105 "enable_placement_id": 0, 00:25:04.105 "enable_zerocopy_send_server": false, 00:25:04.105 "enable_zerocopy_send_client": false, 00:25:04.105 "zerocopy_threshold": 0, 00:25:04.105 "tls_version": 0, 00:25:04.105 "enable_ktls": false 00:25:04.105 } 00:25:04.105 } 00:25:04.105 ] 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "subsystem": "vmd", 00:25:04.105 "config": [] 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "subsystem": "accel", 00:25:04.105 "config": [ 00:25:04.105 { 00:25:04.105 "method": "accel_set_options", 00:25:04.105 "params": { 00:25:04.105 "small_cache_size": 128, 00:25:04.105 "large_cache_size": 16, 00:25:04.105 "task_count": 2048, 00:25:04.105 "sequence_count": 2048, 00:25:04.105 "buf_count": 2048 00:25:04.105 } 00:25:04.105 } 00:25:04.105 ] 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "subsystem": "bdev", 00:25:04.105 "config": [ 00:25:04.105 { 00:25:04.105 "method": "bdev_set_options", 00:25:04.105 "params": { 00:25:04.105 "bdev_io_pool_size": 65535, 00:25:04.105 "bdev_io_cache_size": 256, 00:25:04.105 "bdev_auto_examine": true, 00:25:04.105 "iobuf_small_cache_size": 128, 00:25:04.105 "iobuf_large_cache_size": 16 00:25:04.105 } 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "method": "bdev_raid_set_options", 00:25:04.105 "params": { 00:25:04.105 "process_window_size_kb": 1024, 00:25:04.105 "process_max_bandwidth_mb_sec": 0 00:25:04.105 } 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "method": "bdev_iscsi_set_options", 00:25:04.105 "params": { 00:25:04.105 "timeout_sec": 30 00:25:04.105 } 00:25:04.105 }, 00:25:04.105 { 00:25:04.105 "method": "bdev_nvme_set_options", 00:25:04.105 "params": { 00:25:04.105 "action_on_timeout": "none", 00:25:04.105 "timeout_us": 0, 00:25:04.105 "timeout_admin_us": 0, 00:25:04.105 "keep_alive_timeout_ms": 10000, 00:25:04.105 "arbitration_burst": 0, 00:25:04.105 "low_priority_weight": 0, 00:25:04.105 "medium_priority_weight": 0, 00:25:04.105 "high_priority_weight": 0, 00:25:04.105 "nvme_adminq_poll_period_us": 10000, 00:25:04.105 "nvme_ioq_poll_period_us": 0, 00:25:04.105 "io_queue_requests": 512, 00:25:04.105 "delay_cmd_submit": true, 00:25:04.105 "transport_retry_count": 4, 00:25:04.105 "bdev_retry_count": 3, 00:25:04.105 "transport_ack_timeout": 0, 00:25:04.105 "ctrlr_loss_timeout_sec": 0, 00:25:04.105 "reconnect_delay_sec": 0, 00:25:04.105 "fast_io_fail_timeout_sec": 0, 00:25:04.105 "disable_auto_failback": false, 00:25:04.105 "generate_uuids": false, 00:25:04.105 "transport_tos": 0, 00:25:04.105 "nvme_error_stat": false, 00:25:04.105 "rdma_srq_size": 0, 00:25:04.105 "io_path_stat": false, 00:25:04.105 "allow_accel_sequence": false, 00:25:04.105 "rdma_max_cq_size": 0, 00:25:04.105 "rdma_cm_event_timeout_ms": 0, 00:25:04.105 
"dhchap_digests": [ 00:25:04.106 "sha256", 00:25:04.106 "sha384", 00:25:04.106 "sha512" 00:25:04.106 ], 00:25:04.106 "dhchap_dhgroups": [ 00:25:04.106 "null", 00:25:04.106 "ffdhe2048", 00:25:04.106 "ffdhe3072", 00:25:04.106 "ffdhe4096", 00:25:04.106 "ffdhe6144", 00:25:04.106 "ffdhe8192" 00:25:04.106 ] 00:25:04.106 } 00:25:04.106 }, 00:25:04.106 { 00:25:04.106 "method": "bdev_nvme_attach_controller", 00:25:04.106 "params": { 00:25:04.106 "name": "TLSTEST", 00:25:04.106 "trtype": "TCP", 00:25:04.106 "adrfam": "IPv4", 00:25:04.106 "traddr": "10.0.0.2", 00:25:04.106 "trsvcid": "4420", 00:25:04.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.106 "prchk_reftag": false, 00:25:04.106 "prchk_guard": false, 00:25:04.106 "ctrlr_loss_timeout_sec": 0, 00:25:04.106 "reconnect_delay_sec": 0, 00:25:04.106 "fast_io_fail_timeout_sec": 0, 00:25:04.106 "psk": "key0", 00:25:04.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.106 "hdgst": false, 00:25:04.106 "ddgst": false, 00:25:04.106 "multipath": "multipath" 00:25:04.106 } 00:25:04.106 }, 00:25:04.106 { 00:25:04.106 "method": "bdev_nvme_set_hotplug", 00:25:04.106 "params": { 00:25:04.106 "period_us": 100000, 00:25:04.106 "enable": false 00:25:04.106 } 00:25:04.106 }, 00:25:04.106 { 00:25:04.106 "method": "bdev_wait_for_examine" 00:25:04.106 } 00:25:04.106 ] 00:25:04.106 }, 00:25:04.106 { 00:25:04.106 "subsystem": "nbd", 00:25:04.106 "config": [] 00:25:04.106 } 00:25:04.106 ] 00:25:04.106 }' 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 70549 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70549 ']' 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70549 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70549 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:04.106 killing process with pid 70549 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70549' 00:25:04.106 Received shutdown signal, test time was about 10.000000 seconds 00:25:04.106 00:25:04.106 Latency(us) 00:25:04.106 [2024-11-20T07:22:28.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.106 [2024-11-20T07:22:28.309Z] =================================================================================================================== 00:25:04.106 [2024-11-20T07:22:28.309Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70549 00:25:04.106 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70549 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70499 ']' 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70499' 00:25:04.365 killing process with pid 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70499 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.365 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:04.365 "subsystems": [ 00:25:04.365 { 00:25:04.365 "subsystem": "keyring", 00:25:04.365 "config": [ 00:25:04.365 { 00:25:04.365 "method": "keyring_file_add_key", 00:25:04.365 "params": { 00:25:04.365 "name": "key0", 00:25:04.365 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:04.365 } 00:25:04.365 } 00:25:04.365 ] 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "subsystem": "iobuf", 00:25:04.365 "config": [ 00:25:04.365 { 00:25:04.365 "method": "iobuf_set_options", 00:25:04.365 "params": { 00:25:04.365 "small_pool_count": 8192, 00:25:04.365 "large_pool_count": 1024, 00:25:04.365 "small_bufsize": 8192, 00:25:04.365 "large_bufsize": 135168, 00:25:04.365 "enable_numa": false 00:25:04.365 } 00:25:04.365 } 00:25:04.365 ] 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "subsystem": "sock", 00:25:04.365 "config": [ 00:25:04.365 { 00:25:04.365 "method": "sock_set_default_impl", 00:25:04.365 "params": { 00:25:04.365 "impl_name": "uring" 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "sock_impl_set_options", 00:25:04.365 "params": { 00:25:04.365 "impl_name": "ssl", 00:25:04.365 "recv_buf_size": 4096, 00:25:04.365 "send_buf_size": 4096, 00:25:04.365 "enable_recv_pipe": true, 00:25:04.365 "enable_quickack": false, 00:25:04.365 "enable_placement_id": 0, 00:25:04.365 "enable_zerocopy_send_server": true, 00:25:04.365 "enable_zerocopy_send_client": false, 00:25:04.365 "zerocopy_threshold": 0, 00:25:04.365 "tls_version": 0, 00:25:04.365 "enable_ktls": false 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "sock_impl_set_options", 00:25:04.365 "params": { 00:25:04.365 "impl_name": "posix", 00:25:04.365 "recv_buf_size": 2097152, 00:25:04.365 "send_buf_size": 2097152, 00:25:04.365 "enable_recv_pipe": true, 00:25:04.365 "enable_quickack": false, 00:25:04.365 "enable_placement_id": 0, 00:25:04.365 "enable_zerocopy_send_server": true, 00:25:04.365 "enable_zerocopy_send_client": false, 00:25:04.365 "zerocopy_threshold": 0, 00:25:04.365 "tls_version": 0, 00:25:04.365 "enable_ktls": false 
00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "sock_impl_set_options", 00:25:04.365 "params": { 00:25:04.365 "impl_name": "uring", 00:25:04.365 "recv_buf_size": 2097152, 00:25:04.365 "send_buf_size": 2097152, 00:25:04.365 "enable_recv_pipe": true, 00:25:04.365 "enable_quickack": false, 00:25:04.365 "enable_placement_id": 0, 00:25:04.365 "enable_zerocopy_send_server": false, 00:25:04.365 "enable_zerocopy_send_client": false, 00:25:04.365 "zerocopy_threshold": 0, 00:25:04.365 "tls_version": 0, 00:25:04.365 "enable_ktls": false 00:25:04.365 } 00:25:04.365 } 00:25:04.365 ] 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "subsystem": "vmd", 00:25:04.365 "config": [] 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "subsystem": "accel", 00:25:04.365 "config": [ 00:25:04.365 { 00:25:04.365 "method": "accel_set_options", 00:25:04.365 "params": { 00:25:04.365 "small_cache_size": 128, 00:25:04.365 "large_cache_size": 16, 00:25:04.365 "task_count": 2048, 00:25:04.365 "sequence_count": 2048, 00:25:04.365 "buf_count": 2048 00:25:04.365 } 00:25:04.365 } 00:25:04.365 ] 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "subsystem": "bdev", 00:25:04.365 "config": [ 00:25:04.365 { 00:25:04.365 "method": "bdev_set_options", 00:25:04.365 "params": { 00:25:04.365 "bdev_io_pool_size": 65535, 00:25:04.365 "bdev_io_cache_size": 256, 00:25:04.365 "bdev_auto_examine": true, 00:25:04.365 "iobuf_small_cache_size": 128, 00:25:04.365 "iobuf_large_cache_size": 16 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "bdev_raid_set_options", 00:25:04.365 "params": { 00:25:04.365 "process_window_size_kb": 1024, 00:25:04.365 "process_max_bandwidth_mb_sec": 0 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "bdev_iscsi_set_options", 00:25:04.365 "params": { 00:25:04.365 "timeout_sec": 30 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "bdev_nvme_set_options", 00:25:04.365 "params": { 00:25:04.365 "action_on_timeout": "none", 00:25:04.365 "timeout_us": 0, 00:25:04.365 "timeout_admin_us": 0, 00:25:04.365 "keep_alive_timeout_ms": 10000, 00:25:04.365 "arbitration_burst": 0, 00:25:04.365 "low_priority_weight": 0, 00:25:04.365 "medium_priority_weight": 0, 00:25:04.365 "high_priority_weight": 0, 00:25:04.365 "nvme_adminq_poll_period_us": 10000, 00:25:04.365 "nvme_ioq_poll_period_us": 0, 00:25:04.365 "io_queue_requests": 0, 00:25:04.365 "delay_cmd_submit": true, 00:25:04.365 "transport_retry_count": 4, 00:25:04.365 "bdev_retry_count": 3, 00:25:04.365 "transport_ack_timeout": 0, 00:25:04.365 "ctrlr_loss_timeout_sec": 0, 00:25:04.365 "reconnect_delay_sec": 0, 00:25:04.365 "fast_io_fail_timeout_sec": 0, 00:25:04.365 "disable_auto_failback": false, 00:25:04.365 "generate_uuids": false, 00:25:04.365 "transport_tos": 0, 00:25:04.365 "nvme_error_stat": false, 00:25:04.365 "rdma_srq_size": 0, 00:25:04.365 "io_path_stat": false, 00:25:04.365 "allow_accel_sequence": false, 00:25:04.365 "rdma_max_cq_size": 0, 00:25:04.365 "rdma_cm_event_timeout_ms": 0, 00:25:04.365 "dhchap_digests": [ 00:25:04.365 "sha256", 00:25:04.365 "sha384", 00:25:04.365 "sha512" 00:25:04.365 ], 00:25:04.365 "dhchap_dhgroups": [ 00:25:04.365 "null", 00:25:04.365 "ffdhe2048", 00:25:04.365 "ffdhe3072", 00:25:04.365 "ffdhe4096", 00:25:04.365 "ffdhe6144", 00:25:04.365 "ffdhe8192" 00:25:04.365 ] 00:25:04.365 } 00:25:04.365 }, 00:25:04.365 { 00:25:04.365 "method": "bdev_nvme_set_hotplug", 00:25:04.365 "params": { 00:25:04.365 "period_us": 100000, 00:25:04.365 "enable": false 00:25:04.365 } 00:25:04.365 }, 
00:25:04.365 { 00:25:04.365 "method": "bdev_malloc_create", 00:25:04.365 "params": { 00:25:04.365 "name": "malloc0", 00:25:04.365 "num_blocks": 8192, 00:25:04.365 "block_size": 4096, 00:25:04.365 "physical_block_size": 4096, 00:25:04.366 "uuid": "fc21a586-3ed0-4b80-8835-a03536bb9300", 00:25:04.366 "optimal_io_boundary": 0, 00:25:04.366 "md_size": 0, 00:25:04.366 "dif_type": 0, 00:25:04.366 "dif_is_head_of_md": false, 00:25:04.366 "dif_pi_format": 0 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "bdev_wait_for_examine" 00:25:04.366 } 00:25:04.366 ] 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "subsystem": "nbd", 00:25:04.366 "config": [] 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "subsystem": "scheduler", 00:25:04.366 "config": [ 00:25:04.366 { 00:25:04.366 "method": "framework_set_scheduler", 00:25:04.366 "params": { 00:25:04.366 "name": "static" 00:25:04.366 } 00:25:04.366 } 00:25:04.366 ] 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "subsystem": "nvmf", 00:25:04.366 "config": [ 00:25:04.366 { 00:25:04.366 "method": "nvmf_set_config", 00:25:04.366 "params": { 00:25:04.366 "discovery_filter": "match_any", 00:25:04.366 "admin_cmd_passthru": { 00:25:04.366 "identify_ctrlr": false 00:25:04.366 }, 00:25:04.366 "dhchap_digests": [ 00:25:04.366 "sha256", 00:25:04.366 "sha384", 00:25:04.366 "sha512" 00:25:04.366 ], 00:25:04.366 "dhchap_dhgroups": [ 00:25:04.366 "null", 00:25:04.366 "ffdhe2048", 00:25:04.366 "ffdhe3072", 00:25:04.366 "ffdhe4096", 00:25:04.366 "ffdhe6144", 00:25:04.366 "ffdhe8192" 00:25:04.366 ] 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_set_max_subsystems", 00:25:04.366 "params": { 00:25:04.366 "max_subsystems": 1024 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_set_crdt", 00:25:04.366 "params": { 00:25:04.366 "crdt1": 0, 00:25:04.366 "crdt2": 0, 00:25:04.366 "crdt3": 0 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_create_transport", 00:25:04.366 "params": { 00:25:04.366 "trtype": "TCP", 00:25:04.366 "max_queue_depth": 128, 00:25:04.366 "max_io_qpairs_per_ctrlr": 127, 00:25:04.366 "in_capsule_data_size": 4096, 00:25:04.366 "max_io_size": 131072, 00:25:04.366 "io_unit_size": 131072, 00:25:04.366 "max_aq_depth": 128, 00:25:04.366 "num_shared_buffers": 511, 00:25:04.366 "buf_cache_size": 4294967295, 00:25:04.366 "dif_insert_or_strip": false, 00:25:04.366 "zcopy": false, 00:25:04.366 "c2h_success": false, 00:25:04.366 "sock_priority": 0, 00:25:04.366 "abort_timeout_sec": 1, 00:25:04.366 "ack_timeout": 0, 00:25:04.366 "data_wr_pool_size": 0 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_create_subsystem", 00:25:04.366 "params": { 00:25:04.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.366 "allow_any_host": false, 00:25:04.366 "serial_number": "SPDK00000000000001", 00:25:04.366 "model_number": "SPDK bdev Controller", 00:25:04.366 "max_namespaces": 10, 00:25:04.366 "min_cntlid": 1, 00:25:04.366 "max_cntlid": 65519, 00:25:04.366 "ana_reporting": false 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_subsystem_add_host", 00:25:04.366 "params": { 00:25:04.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.366 "host": "nqn.2016-06.io.spdk:host1", 00:25:04.366 "psk": "key0" 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_subsystem_add_ns", 00:25:04.366 "params": { 00:25:04.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.366 "namespace": { 00:25:04.366 "nsid": 1, 00:25:04.366 "bdev_name": "malloc0", 
00:25:04.366 "nguid": "FC21A5863ED04B808835A03536BB9300", 00:25:04.366 "uuid": "fc21a586-3ed0-4b80-8835-a03536bb9300", 00:25:04.366 "no_auto_visible": false 00:25:04.366 } 00:25:04.366 } 00:25:04.366 }, 00:25:04.366 { 00:25:04.366 "method": "nvmf_subsystem_add_listener", 00:25:04.366 "params": { 00:25:04.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.366 "listen_address": { 00:25:04.366 "trtype": "TCP", 00:25:04.366 "adrfam": "IPv4", 00:25:04.366 "traddr": "10.0.0.2", 00:25:04.366 "trsvcid": "4420" 00:25:04.366 }, 00:25:04.366 "secure_channel": true 00:25:04.366 } 00:25:04.366 } 00:25:04.366 ] 00:25:04.366 } 00:25:04.366 ] 00:25:04.366 }' 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70599 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70599 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70599 ']' 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.366 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.366 [2024-11-20 07:22:28.553110] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:04.366 [2024-11-20 07:22:28.553170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.624 [2024-11-20 07:22:28.690011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.624 [2024-11-20 07:22:28.724536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.624 [2024-11-20 07:22:28.724584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.624 [2024-11-20 07:22:28.724591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.624 [2024-11-20 07:22:28.724595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.624 [2024-11-20 07:22:28.724600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:04.624 [2024-11-20 07:22:28.724907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.945 [2024-11-20 07:22:28.869586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.945 [2024-11-20 07:22:28.932893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.945 [2024-11-20 07:22:28.964853] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.945 [2024-11-20 07:22:28.965023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=70625 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 70625 /var/tmp/bdevperf.sock 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70625 ']' 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
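The bdevperf side is replayed the same way at target/tls.sh@206: the config echoed below arrives over -c /dev/fd/63, recreating the key and the TLS-enabled controller at startup. Issued as live RPCs instead (exactly as the earlier bdevperf instance, pid 70549, did), the equivalent is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the PSK with the bdevperf app's own keyring...
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2

# ...then connect over TLS; --psk selects the key, and the host NQN
# must match the one registered with nvmf_subsystem_add_host.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0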
00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:05.204 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:05.204 "subsystems": [ 00:25:05.204 { 00:25:05.204 "subsystem": "keyring", 00:25:05.204 "config": [ 00:25:05.204 { 00:25:05.204 "method": "keyring_file_add_key", 00:25:05.204 "params": { 00:25:05.204 "name": "key0", 00:25:05.204 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:05.204 } 00:25:05.204 } 00:25:05.204 ] 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "subsystem": "iobuf", 00:25:05.204 "config": [ 00:25:05.204 { 00:25:05.204 "method": "iobuf_set_options", 00:25:05.204 "params": { 00:25:05.204 "small_pool_count": 8192, 00:25:05.204 "large_pool_count": 1024, 00:25:05.204 "small_bufsize": 8192, 00:25:05.204 "large_bufsize": 135168, 00:25:05.204 "enable_numa": false 00:25:05.204 } 00:25:05.204 } 00:25:05.204 ] 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "subsystem": "sock", 00:25:05.204 "config": [ 00:25:05.204 { 00:25:05.204 "method": "sock_set_default_impl", 00:25:05.204 "params": { 00:25:05.204 "impl_name": "uring" 00:25:05.204 } 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "method": "sock_impl_set_options", 00:25:05.204 "params": { 00:25:05.204 "impl_name": "ssl", 00:25:05.204 "recv_buf_size": 4096, 00:25:05.204 "send_buf_size": 4096, 00:25:05.204 "enable_recv_pipe": true, 00:25:05.204 "enable_quickack": false, 00:25:05.204 "enable_placement_id": 0, 00:25:05.204 "enable_zerocopy_send_server": true, 00:25:05.204 "enable_zerocopy_send_client": false, 00:25:05.204 "zerocopy_threshold": 0, 00:25:05.204 "tls_version": 0, 00:25:05.204 "enable_ktls": false 00:25:05.204 } 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "method": "sock_impl_set_options", 00:25:05.204 "params": { 00:25:05.204 "impl_name": "posix", 00:25:05.204 "recv_buf_size": 2097152, 00:25:05.204 "send_buf_size": 2097152, 00:25:05.204 "enable_recv_pipe": true, 00:25:05.204 "enable_quickack": false, 00:25:05.204 "enable_placement_id": 0, 00:25:05.204 "enable_zerocopy_send_server": true, 00:25:05.204 "enable_zerocopy_send_client": false, 00:25:05.204 "zerocopy_threshold": 0, 00:25:05.204 "tls_version": 0, 00:25:05.204 "enable_ktls": false 00:25:05.204 } 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "method": "sock_impl_set_options", 00:25:05.204 "params": { 00:25:05.204 "impl_name": "uring", 00:25:05.204 "recv_buf_size": 2097152, 00:25:05.204 "send_buf_size": 2097152, 00:25:05.204 "enable_recv_pipe": true, 00:25:05.204 "enable_quickack": false, 00:25:05.204 "enable_placement_id": 0, 00:25:05.204 "enable_zerocopy_send_server": false, 00:25:05.204 "enable_zerocopy_send_client": false, 00:25:05.204 "zerocopy_threshold": 0, 00:25:05.204 "tls_version": 0, 00:25:05.204 "enable_ktls": false 00:25:05.204 } 00:25:05.204 } 00:25:05.204 ] 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "subsystem": "vmd", 00:25:05.204 "config": [] 00:25:05.204 }, 00:25:05.204 { 00:25:05.204 "subsystem": "accel", 00:25:05.204 "config": [ 00:25:05.204 { 00:25:05.204 "method": "accel_set_options", 00:25:05.204 "params": { 00:25:05.204 "small_cache_size": 128, 00:25:05.204 "large_cache_size": 16, 00:25:05.204 "task_count": 2048, 00:25:05.205 "sequence_count": 
2048, 00:25:05.205 "buf_count": 2048 00:25:05.205 } 00:25:05.205 } 00:25:05.205 ] 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "subsystem": "bdev", 00:25:05.205 "config": [ 00:25:05.205 { 00:25:05.205 "method": "bdev_set_options", 00:25:05.205 "params": { 00:25:05.205 "bdev_io_pool_size": 65535, 00:25:05.205 "bdev_io_cache_size": 256, 00:25:05.205 "bdev_auto_examine": true, 00:25:05.205 "iobuf_small_cache_size": 128, 00:25:05.205 "iobuf_large_cache_size": 16 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_raid_set_options", 00:25:05.205 "params": { 00:25:05.205 "process_window_size_kb": 1024, 00:25:05.205 "process_max_bandwidth_mb_sec": 0 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_iscsi_set_options", 00:25:05.205 "params": { 00:25:05.205 "timeout_sec": 30 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_nvme_set_options", 00:25:05.205 "params": { 00:25:05.205 "action_on_timeout": "none", 00:25:05.205 "timeout_us": 0, 00:25:05.205 "timeout_admin_us": 0, 00:25:05.205 "keep_alive_timeout_ms": 10000, 00:25:05.205 "arbitration_burst": 0, 00:25:05.205 "low_priority_weight": 0, 00:25:05.205 "medium_priority_weight": 0, 00:25:05.205 "high_priority_weight": 0, 00:25:05.205 "nvme_adminq_poll_period_us": 10000, 00:25:05.205 "nvme_ioq_poll_period_us": 0, 00:25:05.205 "io_queue_requests": 512, 00:25:05.205 "delay_cmd_submit": true, 00:25:05.205 "transport_retry_count": 4, 00:25:05.205 "bdev_retry_count": 3, 00:25:05.205 "transport_ack_timeout": 0, 00:25:05.205 "ctrlr_loss_timeout_sec": 0, 00:25:05.205 "reconnect_delay_sec": 0, 00:25:05.205 "fast_io_fail_timeout_sec": 0, 00:25:05.205 "disable_auto_failback": false, 00:25:05.205 "generate_uuids": false, 00:25:05.205 "transport_tos": 0, 00:25:05.205 "nvme_error_stat": false, 00:25:05.205 "rdma_srq_size": 0, 00:25:05.205 "io_path_stat": false, 00:25:05.205 "allow_accel_sequence": false, 00:25:05.205 "rdma_max_cq_size": 0, 00:25:05.205 "rdma_cm_event_timeout_ms": 0, 00:25:05.205 "dhchap_digests": [ 00:25:05.205 "sha256", 00:25:05.205 "sha384", 00:25:05.205 "sha512" 00:25:05.205 ], 00:25:05.205 "dhchap_dhgroups": [ 00:25:05.205 "null", 00:25:05.205 "ffdhe2048", 00:25:05.205 "ffdhe3072", 00:25:05.205 "ffdhe4096", 00:25:05.205 "ffdhe6144", 00:25:05.205 "ffdhe8192" 00:25:05.205 ] 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_nvme_attach_controller", 00:25:05.205 "params": { 00:25:05.205 "name": "TLSTEST", 00:25:05.205 "trtype": "TCP", 00:25:05.205 "adrfam": "IPv4", 00:25:05.205 "traddr": "10.0.0.2", 00:25:05.205 "trsvcid": "4420", 00:25:05.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.205 "prchk_reftag": false, 00:25:05.205 "prchk_guard": false, 00:25:05.205 "ctrlr_loss_timeout_sec": 0, 00:25:05.205 "reconnect_delay_sec": 0, 00:25:05.205 "fast_io_fail_timeout_sec": 0, 00:25:05.205 "psk": "key0", 00:25:05.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.205 "hdgst": false, 00:25:05.205 "ddgst": false, 00:25:05.205 "multipath": "multipath" 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_nvme_set_hotplug", 00:25:05.205 "params": { 00:25:05.205 "period_us": 100000, 00:25:05.205 "enable": false 00:25:05.205 } 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "method": "bdev_wait_for_examine" 00:25:05.205 } 00:25:05.205 ] 00:25:05.205 }, 00:25:05.205 { 00:25:05.205 "subsystem": "nbd", 00:25:05.205 "config": [] 00:25:05.205 } 00:25:05.205 ] 00:25:05.205 }' 00:25:05.463 [2024-11-20 07:22:29.411844] Starting SPDK v25.01-pre git 
sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:05.463 [2024-11-20 07:22:29.411915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:25:05.463 [2024-11-20 07:22:29.546741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.463 [2024-11-20 07:22:29.579117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.722 [2024-11-20 07:22:29.689853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:05.722 [2024-11-20 07:22:29.725448] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.286 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.286 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:06.286 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:06.286 Running I/O for 10 seconds... 00:25:08.222 6745.00 IOPS, 26.35 MiB/s [2024-11-20T07:22:33.357Z] 6811.00 IOPS, 26.61 MiB/s [2024-11-20T07:22:34.732Z] 6848.00 IOPS, 26.75 MiB/s [2024-11-20T07:22:35.680Z] 6871.75 IOPS, 26.84 MiB/s [2024-11-20T07:22:36.613Z] 6891.80 IOPS, 26.92 MiB/s [2024-11-20T07:22:37.545Z] 6883.33 IOPS, 26.89 MiB/s [2024-11-20T07:22:38.509Z] 6823.00 IOPS, 26.65 MiB/s [2024-11-20T07:22:39.450Z] 6826.50 IOPS, 26.67 MiB/s [2024-11-20T07:22:40.406Z] 6819.22 IOPS, 26.64 MiB/s [2024-11-20T07:22:40.406Z] 6830.80 IOPS, 26.68 MiB/s 00:25:16.203 Latency(us) 00:25:16.203 [2024-11-20T07:22:40.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.204 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:16.204 Verification LBA range: start 0x0 length 0x2000 00:25:16.204 TLSTESTn1 : 10.01 6836.59 26.71 0.00 0.00 18693.74 3654.89 18148.43 00:25:16.204 [2024-11-20T07:22:40.407Z] =================================================================================================================== 00:25:16.204 [2024-11-20T07:22:40.407Z] Total : 6836.59 26.71 0.00 0.00 18693.74 3654.89 18148.43 00:25:16.204 { 00:25:16.204 "results": [ 00:25:16.204 { 00:25:16.204 "job": "TLSTESTn1", 00:25:16.204 "core_mask": "0x4", 00:25:16.204 "workload": "verify", 00:25:16.204 "status": "finished", 00:25:16.204 "verify_range": { 00:25:16.204 "start": 0, 00:25:16.204 "length": 8192 00:25:16.204 }, 00:25:16.204 "queue_depth": 128, 00:25:16.204 "io_size": 4096, 00:25:16.204 "runtime": 10.009817, 00:25:16.204 "iops": 6836.588521048886, 00:25:16.204 "mibps": 26.70542391034721, 00:25:16.204 "io_failed": 0, 00:25:16.204 "io_timeout": 0, 00:25:16.204 "avg_latency_us": 18693.739035935203, 00:25:16.204 "min_latency_us": 3654.892307692308, 00:25:16.204 "max_latency_us": 18148.43076923077 00:25:16.204 } 00:25:16.204 ], 00:25:16.204 "core_count": 1 00:25:16.204 } 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 70625 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70625 ']' 00:25:16.204 
07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70625 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70625 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:16.204 killing process with pid 70625 00:25:16.204 Received shutdown signal, test time was about 10.000000 seconds 00:25:16.204 00:25:16.204 Latency(us) 00:25:16.204 [2024-11-20T07:22:40.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.204 [2024-11-20T07:22:40.407Z] =================================================================================================================== 00:25:16.204 [2024-11-20T07:22:40.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70625' 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70625 00:25:16.204 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70625 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70599 ']' 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:16.462 killing process with pid 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70599' 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70599 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70758 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70758 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 70758 ']' 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.462 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.719 [2024-11-20 07:22:40.664713] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:16.719 [2024-11-20 07:22:40.664783] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.719 [2024-11-20 07:22:40.796954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.719 [2024-11-20 07:22:40.831436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.719 [2024-11-20 07:22:40.831481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.719 [2024-11-20 07:22:40.831488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.719 [2024-11-20 07:22:40.831492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.719 [2024-11-20 07:22:40.831497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
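The MiB/s column in the TLSTESTn1 table above follows directly from the iops and io_size fields of the results JSON; a one-line awk check (constants copied from that JSON, not part of the harness) reproduces it:

    awk 'BEGIN {
        iops = 6836.588521048886; io_size = 4096       # "iops" and "io_size" from the results above
        printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
    }'
    # prints 26.71 MiB/s, matching the reported "mibps" of 26.705...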
00:25:16.719 [2024-11-20 07:22:40.831766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.719 [2024-11-20 07:22:40.862383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:16.719 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.719 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:16.719 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:16.719 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.719 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.014 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.014 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VmZpFsi8I2 00:25:17.014 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VmZpFsi8I2 00:25:17.014 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:17.014 [2024-11-20 07:22:41.098586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.014 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:17.272 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:17.272 [2024-11-20 07:22:41.466646] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:17.272 [2024-11-20 07:22:41.466818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.531 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:17.531 malloc0 00:25:17.531 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:17.789 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=70806 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 70806 /var/tmp/bdevperf.sock 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70806 ']' 00:25:18.047 
07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.047 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.047 [2024-11-20 07:22:42.211913] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:18.047 [2024-11-20 07:22:42.211981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70806 ] 00:25:18.305 [2024-11-20 07:22:42.349578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.305 [2024-11-20 07:22:42.380625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.305 [2024-11-20 07:22:42.409188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:18.872 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.872 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:18.872 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:25:19.130 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:19.388 [2024-11-20 07:22:43.347693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.388 nvme0n1 00:25:19.388 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:19.388 Running I/O for 1 seconds... 
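Stripped of the xtrace prefixes, the target and initiator bring-up traced in the blocks above reduces to the following rpc.py sequence (commands, NQNs and the /tmp/tmp.VmZpFsi8I2 PSK path exactly as logged; shown only as a condensed recap):

    # target side (default RPC socket):
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the TLS listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf's RPC socket):
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Both the -k listener and the --psk attach are what trigger the "TLS support is considered experimental" notices seen in the trace.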
00:25:20.581 6368.00 IOPS, 24.88 MiB/s 00:25:20.581 Latency(us) 00:25:20.581 [2024-11-20T07:22:44.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.581 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:20.581 Verification LBA range: start 0x0 length 0x2000 00:25:20.581 nvme0n1 : 1.01 6434.41 25.13 0.00 0.00 19774.88 3554.07 13611.32 00:25:20.581 [2024-11-20T07:22:44.784Z] =================================================================================================================== 00:25:20.581 [2024-11-20T07:22:44.784Z] Total : 6434.41 25.13 0.00 0.00 19774.88 3554.07 13611.32 00:25:20.581 { 00:25:20.582 "results": [ 00:25:20.582 { 00:25:20.582 "job": "nvme0n1", 00:25:20.582 "core_mask": "0x2", 00:25:20.582 "workload": "verify", 00:25:20.582 "status": "finished", 00:25:20.582 "verify_range": { 00:25:20.582 "start": 0, 00:25:20.582 "length": 8192 00:25:20.582 }, 00:25:20.582 "queue_depth": 128, 00:25:20.582 "io_size": 4096, 00:25:20.582 "runtime": 1.009728, 00:25:20.582 "iops": 6434.406097483678, 00:25:20.582 "mibps": 25.13439881829562, 00:25:20.582 "io_failed": 0, 00:25:20.582 "io_timeout": 0, 00:25:20.582 "avg_latency_us": 19774.877535430554, 00:25:20.582 "min_latency_us": 3554.067692307692, 00:25:20.582 "max_latency_us": 13611.323076923078 00:25:20.582 } 00:25:20.582 ], 00:25:20.582 "core_count": 1 00:25:20.582 } 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70806 ']' 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.582 killing process with pid 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70806' 00:25:20.582 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.582 00:25:20.582 Latency(us) 00:25:20.582 [2024-11-20T07:22:44.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.582 [2024-11-20T07:22:44.785Z] =================================================================================================================== 00:25:20.582 [2024-11-20T07:22:44.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70806 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 70758 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70758 ']' 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70758 00:25:20.582 07:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70758 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.582 killing process with pid 70758 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70758' 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70758 00:25:20.582 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70758 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70852 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70852 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70852 ']' 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.840 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.840 [2024-11-20 07:22:44.840806] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:20.840 [2024-11-20 07:22:44.840870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.840 [2024-11-20 07:22:44.978002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.840 [2024-11-20 07:22:45.007543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.840 [2024-11-20 07:22:45.007581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
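The kill -0 / uname / ps / kill / wait runs traced at autotest_common.sh@954-@978 are its killprocess helper; reconstructed from the trace (a sketch, not the verbatim source), the pattern is:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                             # @954: require a pid
        if kill -0 "$pid"; then                               # @958: only act if still alive
            local process_name
            if [[ $(uname) == Linux ]]; then                  # @959
                process_name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_0
            fi
            if [[ $process_name != sudo ]]; then              # @964: don't signal a sudo wrapper
                echo "killing process with pid $pid"          # @972
                kill "$pid"                                   # @973
            fi
            wait "$pid"                                       # @978: reap and surface exit status
        fi
    }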
00:25:20.840 [2024-11-20 07:22:45.007587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.840 [2024-11-20 07:22:45.007591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.840 [2024-11-20 07:22:45.007596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.840 [2024-11-20 07:22:45.007809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.840 [2024-11-20 07:22:45.035815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.772 [2024-11-20 07:22:45.704801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.772 malloc0 00:25:21.772 [2024-11-20 07:22:45.730555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.772 [2024-11-20 07:22:45.730679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=70878 00:25:21.772 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 70878 /var/tmp/bdevperf.sock 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70878 ']' 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
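waitforlisten (@839-@868 in the trace) keeps its polling loop behind xtrace_disable, so only the setup lines and the final (( i == 0 )) check are visible here; under that assumption, a rough reconstruction of what it does is:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}        # @839
        local max_retries=100                                 # @840
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."  # @842
        local i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" || return 1                        # target died while we waited
            # probe the RPC socket; rpc_get_methods is a cheap call that any SPDK app answers
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                              # @864: retries exhausted
        return 0                                              # @868
    }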
00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.773 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.773 [2024-11-20 07:22:45.791653] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:21.773 [2024-11-20 07:22:45.791699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70878 ] 00:25:21.773 [2024-11-20 07:22:45.926289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.773 [2024-11-20 07:22:45.957274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.030 [2024-11-20 07:22:45.986268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:22.596 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.596 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:22.596 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VmZpFsi8I2 00:25:22.855 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:23.113 [2024-11-20 07:22:47.072572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.113 nvme0n1 00:25:23.113 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:23.113 Running I/O for 1 seconds... 
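As in the previous run, bdevperf is launched with -z, meaning it initializes, opens its RPC socket, and then idles until bdevperf.py sends the perform_tests RPC; -q 128 -o 4k -w verify -t 1 come back as the queue_depth, io_size, workload and runtime fields of the results JSON below. The pairing in isolation (flags exactly as logged):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    # ... keyring_file_add_key + bdev_nvme_attach_controller as in the recap above ...
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests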
00:25:24.085 7073.00 IOPS, 27.63 MiB/s 00:25:24.085 Latency(us) 00:25:24.085 [2024-11-20T07:22:48.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:24.085 Verification LBA range: start 0x0 length 0x2000 00:25:24.085 nvme0n1 : 1.01 7136.18 27.88 0.00 0.00 17831.83 3327.21 14317.10 00:25:24.085 [2024-11-20T07:22:48.288Z] =================================================================================================================== 00:25:24.085 [2024-11-20T07:22:48.288Z] Total : 7136.18 27.88 0.00 0.00 17831.83 3327.21 14317.10 00:25:24.085 { 00:25:24.085 "results": [ 00:25:24.085 { 00:25:24.085 "job": "nvme0n1", 00:25:24.085 "core_mask": "0x2", 00:25:24.085 "workload": "verify", 00:25:24.085 "status": "finished", 00:25:24.085 "verify_range": { 00:25:24.085 "start": 0, 00:25:24.085 "length": 8192 00:25:24.085 }, 00:25:24.085 "queue_depth": 128, 00:25:24.085 "io_size": 4096, 00:25:24.085 "runtime": 1.009224, 00:25:24.085 "iops": 7136.175913375028, 00:25:24.085 "mibps": 27.875687161621205, 00:25:24.085 "io_failed": 0, 00:25:24.085 "io_timeout": 0, 00:25:24.085 "avg_latency_us": 17831.831397261445, 00:25:24.085 "min_latency_us": 3327.2123076923076, 00:25:24.085 "max_latency_us": 14317.095384615384 00:25:24.085 } 00:25:24.085 ], 00:25:24.085 "core_count": 1 00:25:24.085 } 00:25:24.085 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:24.085 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.085 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.343 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.343 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:24.343 "subsystems": [ 00:25:24.343 { 00:25:24.343 "subsystem": "keyring", 00:25:24.343 "config": [ 00:25:24.343 { 00:25:24.343 "method": "keyring_file_add_key", 00:25:24.343 "params": { 00:25:24.343 "name": "key0", 00:25:24.343 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:24.343 } 00:25:24.343 } 00:25:24.343 ] 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "subsystem": "iobuf", 00:25:24.343 "config": [ 00:25:24.343 { 00:25:24.343 "method": "iobuf_set_options", 00:25:24.343 "params": { 00:25:24.343 "small_pool_count": 8192, 00:25:24.343 "large_pool_count": 1024, 00:25:24.343 "small_bufsize": 8192, 00:25:24.343 "large_bufsize": 135168, 00:25:24.343 "enable_numa": false 00:25:24.343 } 00:25:24.343 } 00:25:24.343 ] 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "subsystem": "sock", 00:25:24.343 "config": [ 00:25:24.343 { 00:25:24.343 "method": "sock_set_default_impl", 00:25:24.343 "params": { 00:25:24.343 "impl_name": "uring" 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "sock_impl_set_options", 00:25:24.343 "params": { 00:25:24.343 "impl_name": "ssl", 00:25:24.343 "recv_buf_size": 4096, 00:25:24.343 "send_buf_size": 4096, 00:25:24.343 "enable_recv_pipe": true, 00:25:24.343 "enable_quickack": false, 00:25:24.343 "enable_placement_id": 0, 00:25:24.343 "enable_zerocopy_send_server": true, 00:25:24.343 "enable_zerocopy_send_client": false, 00:25:24.343 "zerocopy_threshold": 0, 00:25:24.343 "tls_version": 0, 00:25:24.343 "enable_ktls": false 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "sock_impl_set_options", 00:25:24.343 "params": { 00:25:24.343 "impl_name": 
"posix", 00:25:24.343 "recv_buf_size": 2097152, 00:25:24.343 "send_buf_size": 2097152, 00:25:24.343 "enable_recv_pipe": true, 00:25:24.343 "enable_quickack": false, 00:25:24.343 "enable_placement_id": 0, 00:25:24.343 "enable_zerocopy_send_server": true, 00:25:24.343 "enable_zerocopy_send_client": false, 00:25:24.343 "zerocopy_threshold": 0, 00:25:24.343 "tls_version": 0, 00:25:24.343 "enable_ktls": false 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "sock_impl_set_options", 00:25:24.343 "params": { 00:25:24.343 "impl_name": "uring", 00:25:24.343 "recv_buf_size": 2097152, 00:25:24.343 "send_buf_size": 2097152, 00:25:24.343 "enable_recv_pipe": true, 00:25:24.343 "enable_quickack": false, 00:25:24.343 "enable_placement_id": 0, 00:25:24.343 "enable_zerocopy_send_server": false, 00:25:24.343 "enable_zerocopy_send_client": false, 00:25:24.343 "zerocopy_threshold": 0, 00:25:24.343 "tls_version": 0, 00:25:24.343 "enable_ktls": false 00:25:24.343 } 00:25:24.343 } 00:25:24.343 ] 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "subsystem": "vmd", 00:25:24.343 "config": [] 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "subsystem": "accel", 00:25:24.343 "config": [ 00:25:24.343 { 00:25:24.343 "method": "accel_set_options", 00:25:24.343 "params": { 00:25:24.343 "small_cache_size": 128, 00:25:24.343 "large_cache_size": 16, 00:25:24.343 "task_count": 2048, 00:25:24.343 "sequence_count": 2048, 00:25:24.343 "buf_count": 2048 00:25:24.343 } 00:25:24.343 } 00:25:24.343 ] 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "subsystem": "bdev", 00:25:24.343 "config": [ 00:25:24.343 { 00:25:24.343 "method": "bdev_set_options", 00:25:24.343 "params": { 00:25:24.343 "bdev_io_pool_size": 65535, 00:25:24.343 "bdev_io_cache_size": 256, 00:25:24.343 "bdev_auto_examine": true, 00:25:24.343 "iobuf_small_cache_size": 128, 00:25:24.343 "iobuf_large_cache_size": 16 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "bdev_raid_set_options", 00:25:24.343 "params": { 00:25:24.343 "process_window_size_kb": 1024, 00:25:24.343 "process_max_bandwidth_mb_sec": 0 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "bdev_iscsi_set_options", 00:25:24.343 "params": { 00:25:24.343 "timeout_sec": 30 00:25:24.343 } 00:25:24.343 }, 00:25:24.343 { 00:25:24.343 "method": "bdev_nvme_set_options", 00:25:24.343 "params": { 00:25:24.343 "action_on_timeout": "none", 00:25:24.343 "timeout_us": 0, 00:25:24.343 "timeout_admin_us": 0, 00:25:24.343 "keep_alive_timeout_ms": 10000, 00:25:24.343 "arbitration_burst": 0, 00:25:24.344 "low_priority_weight": 0, 00:25:24.344 "medium_priority_weight": 0, 00:25:24.344 "high_priority_weight": 0, 00:25:24.344 "nvme_adminq_poll_period_us": 10000, 00:25:24.344 "nvme_ioq_poll_period_us": 0, 00:25:24.344 "io_queue_requests": 0, 00:25:24.344 "delay_cmd_submit": true, 00:25:24.344 "transport_retry_count": 4, 00:25:24.344 "bdev_retry_count": 3, 00:25:24.344 "transport_ack_timeout": 0, 00:25:24.344 "ctrlr_loss_timeout_sec": 0, 00:25:24.344 "reconnect_delay_sec": 0, 00:25:24.344 "fast_io_fail_timeout_sec": 0, 00:25:24.344 "disable_auto_failback": false, 00:25:24.344 "generate_uuids": false, 00:25:24.344 "transport_tos": 0, 00:25:24.344 "nvme_error_stat": false, 00:25:24.344 "rdma_srq_size": 0, 00:25:24.344 "io_path_stat": false, 00:25:24.344 "allow_accel_sequence": false, 00:25:24.344 "rdma_max_cq_size": 0, 00:25:24.344 "rdma_cm_event_timeout_ms": 0, 00:25:24.344 "dhchap_digests": [ 00:25:24.344 "sha256", 00:25:24.344 "sha384", 00:25:24.344 "sha512" 00:25:24.344 ], 00:25:24.344 
"dhchap_dhgroups": [ 00:25:24.344 "null", 00:25:24.344 "ffdhe2048", 00:25:24.344 "ffdhe3072", 00:25:24.344 "ffdhe4096", 00:25:24.344 "ffdhe6144", 00:25:24.344 "ffdhe8192" 00:25:24.344 ] 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "bdev_nvme_set_hotplug", 00:25:24.344 "params": { 00:25:24.344 "period_us": 100000, 00:25:24.344 "enable": false 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "bdev_malloc_create", 00:25:24.344 "params": { 00:25:24.344 "name": "malloc0", 00:25:24.344 "num_blocks": 8192, 00:25:24.344 "block_size": 4096, 00:25:24.344 "physical_block_size": 4096, 00:25:24.344 "uuid": "7b5e63d1-07de-4d9f-8d9c-88bcf44c24e9", 00:25:24.344 "optimal_io_boundary": 0, 00:25:24.344 "md_size": 0, 00:25:24.344 "dif_type": 0, 00:25:24.344 "dif_is_head_of_md": false, 00:25:24.344 "dif_pi_format": 0 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "bdev_wait_for_examine" 00:25:24.344 } 00:25:24.344 ] 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "subsystem": "nbd", 00:25:24.344 "config": [] 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "subsystem": "scheduler", 00:25:24.344 "config": [ 00:25:24.344 { 00:25:24.344 "method": "framework_set_scheduler", 00:25:24.344 "params": { 00:25:24.344 "name": "static" 00:25:24.344 } 00:25:24.344 } 00:25:24.344 ] 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "subsystem": "nvmf", 00:25:24.344 "config": [ 00:25:24.344 { 00:25:24.344 "method": "nvmf_set_config", 00:25:24.344 "params": { 00:25:24.344 "discovery_filter": "match_any", 00:25:24.344 "admin_cmd_passthru": { 00:25:24.344 "identify_ctrlr": false 00:25:24.344 }, 00:25:24.344 "dhchap_digests": [ 00:25:24.344 "sha256", 00:25:24.344 "sha384", 00:25:24.344 "sha512" 00:25:24.344 ], 00:25:24.344 "dhchap_dhgroups": [ 00:25:24.344 "null", 00:25:24.344 "ffdhe2048", 00:25:24.344 "ffdhe3072", 00:25:24.344 "ffdhe4096", 00:25:24.344 "ffdhe6144", 00:25:24.344 "ffdhe8192" 00:25:24.344 ] 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_set_max_subsystems", 00:25:24.344 "params": { 00:25:24.344 "max_subsystems": 1024 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_set_crdt", 00:25:24.344 "params": { 00:25:24.344 "crdt1": 0, 00:25:24.344 "crdt2": 0, 00:25:24.344 "crdt3": 0 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_create_transport", 00:25:24.344 "params": { 00:25:24.344 "trtype": "TCP", 00:25:24.344 "max_queue_depth": 128, 00:25:24.344 "max_io_qpairs_per_ctrlr": 127, 00:25:24.344 "in_capsule_data_size": 4096, 00:25:24.344 "max_io_size": 131072, 00:25:24.344 "io_unit_size": 131072, 00:25:24.344 "max_aq_depth": 128, 00:25:24.344 "num_shared_buffers": 511, 00:25:24.344 "buf_cache_size": 4294967295, 00:25:24.344 "dif_insert_or_strip": false, 00:25:24.344 "zcopy": false, 00:25:24.344 "c2h_success": false, 00:25:24.344 "sock_priority": 0, 00:25:24.344 "abort_timeout_sec": 1, 00:25:24.344 "ack_timeout": 0, 00:25:24.344 "data_wr_pool_size": 0 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_create_subsystem", 00:25:24.344 "params": { 00:25:24.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.344 "allow_any_host": false, 00:25:24.344 "serial_number": "00000000000000000000", 00:25:24.344 "model_number": "SPDK bdev Controller", 00:25:24.344 "max_namespaces": 32, 00:25:24.344 "min_cntlid": 1, 00:25:24.344 "max_cntlid": 65519, 00:25:24.344 "ana_reporting": false 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_subsystem_add_host", 
00:25:24.344 "params": { 00:25:24.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.344 "host": "nqn.2016-06.io.spdk:host1", 00:25:24.344 "psk": "key0" 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_subsystem_add_ns", 00:25:24.344 "params": { 00:25:24.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.344 "namespace": { 00:25:24.344 "nsid": 1, 00:25:24.344 "bdev_name": "malloc0", 00:25:24.344 "nguid": "7B5E63D107DE4D9F8D9C88BCF44C24E9", 00:25:24.344 "uuid": "7b5e63d1-07de-4d9f-8d9c-88bcf44c24e9", 00:25:24.344 "no_auto_visible": false 00:25:24.344 } 00:25:24.344 } 00:25:24.344 }, 00:25:24.344 { 00:25:24.344 "method": "nvmf_subsystem_add_listener", 00:25:24.344 "params": { 00:25:24.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.344 "listen_address": { 00:25:24.344 "trtype": "TCP", 00:25:24.344 "adrfam": "IPv4", 00:25:24.344 "traddr": "10.0.0.2", 00:25:24.344 "trsvcid": "4420" 00:25:24.344 }, 00:25:24.344 "secure_channel": false, 00:25:24.344 "sock_impl": "ssl" 00:25:24.344 } 00:25:24.344 } 00:25:24.344 ] 00:25:24.344 } 00:25:24.344 ] 00:25:24.344 }' 00:25:24.344 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:24.602 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:24.602 "subsystems": [ 00:25:24.602 { 00:25:24.602 "subsystem": "keyring", 00:25:24.602 "config": [ 00:25:24.602 { 00:25:24.602 "method": "keyring_file_add_key", 00:25:24.602 "params": { 00:25:24.602 "name": "key0", 00:25:24.602 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:24.602 } 00:25:24.602 } 00:25:24.602 ] 00:25:24.602 }, 00:25:24.602 { 00:25:24.602 "subsystem": "iobuf", 00:25:24.602 "config": [ 00:25:24.602 { 00:25:24.602 "method": "iobuf_set_options", 00:25:24.602 "params": { 00:25:24.602 "small_pool_count": 8192, 00:25:24.602 "large_pool_count": 1024, 00:25:24.602 "small_bufsize": 8192, 00:25:24.602 "large_bufsize": 135168, 00:25:24.602 "enable_numa": false 00:25:24.602 } 00:25:24.602 } 00:25:24.602 ] 00:25:24.602 }, 00:25:24.602 { 00:25:24.602 "subsystem": "sock", 00:25:24.602 "config": [ 00:25:24.602 { 00:25:24.602 "method": "sock_set_default_impl", 00:25:24.602 "params": { 00:25:24.602 "impl_name": "uring" 00:25:24.602 } 00:25:24.602 }, 00:25:24.602 { 00:25:24.602 "method": "sock_impl_set_options", 00:25:24.602 "params": { 00:25:24.602 "impl_name": "ssl", 00:25:24.602 "recv_buf_size": 4096, 00:25:24.602 "send_buf_size": 4096, 00:25:24.602 "enable_recv_pipe": true, 00:25:24.602 "enable_quickack": false, 00:25:24.602 "enable_placement_id": 0, 00:25:24.602 "enable_zerocopy_send_server": true, 00:25:24.602 "enable_zerocopy_send_client": false, 00:25:24.602 "zerocopy_threshold": 0, 00:25:24.602 "tls_version": 0, 00:25:24.602 "enable_ktls": false 00:25:24.602 } 00:25:24.602 }, 00:25:24.602 { 00:25:24.602 "method": "sock_impl_set_options", 00:25:24.602 "params": { 00:25:24.602 "impl_name": "posix", 00:25:24.602 "recv_buf_size": 2097152, 00:25:24.602 "send_buf_size": 2097152, 00:25:24.602 "enable_recv_pipe": true, 00:25:24.602 "enable_quickack": false, 00:25:24.602 "enable_placement_id": 0, 00:25:24.602 "enable_zerocopy_send_server": true, 00:25:24.602 "enable_zerocopy_send_client": false, 00:25:24.602 "zerocopy_threshold": 0, 00:25:24.602 "tls_version": 0, 00:25:24.602 "enable_ktls": false 00:25:24.602 } 00:25:24.602 }, 00:25:24.602 { 00:25:24.602 "method": "sock_impl_set_options", 00:25:24.602 "params": { 00:25:24.602 "impl_name": "uring", 00:25:24.602 
"recv_buf_size": 2097152, 00:25:24.602 "send_buf_size": 2097152, 00:25:24.602 "enable_recv_pipe": true, 00:25:24.602 "enable_quickack": false, 00:25:24.603 "enable_placement_id": 0, 00:25:24.603 "enable_zerocopy_send_server": false, 00:25:24.603 "enable_zerocopy_send_client": false, 00:25:24.603 "zerocopy_threshold": 0, 00:25:24.603 "tls_version": 0, 00:25:24.603 "enable_ktls": false 00:25:24.603 } 00:25:24.603 } 00:25:24.603 ] 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "subsystem": "vmd", 00:25:24.603 "config": [] 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "subsystem": "accel", 00:25:24.603 "config": [ 00:25:24.603 { 00:25:24.603 "method": "accel_set_options", 00:25:24.603 "params": { 00:25:24.603 "small_cache_size": 128, 00:25:24.603 "large_cache_size": 16, 00:25:24.603 "task_count": 2048, 00:25:24.603 "sequence_count": 2048, 00:25:24.603 "buf_count": 2048 00:25:24.603 } 00:25:24.603 } 00:25:24.603 ] 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "subsystem": "bdev", 00:25:24.603 "config": [ 00:25:24.603 { 00:25:24.603 "method": "bdev_set_options", 00:25:24.603 "params": { 00:25:24.603 "bdev_io_pool_size": 65535, 00:25:24.603 "bdev_io_cache_size": 256, 00:25:24.603 "bdev_auto_examine": true, 00:25:24.603 "iobuf_small_cache_size": 128, 00:25:24.603 "iobuf_large_cache_size": 16 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_raid_set_options", 00:25:24.603 "params": { 00:25:24.603 "process_window_size_kb": 1024, 00:25:24.603 "process_max_bandwidth_mb_sec": 0 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_iscsi_set_options", 00:25:24.603 "params": { 00:25:24.603 "timeout_sec": 30 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_nvme_set_options", 00:25:24.603 "params": { 00:25:24.603 "action_on_timeout": "none", 00:25:24.603 "timeout_us": 0, 00:25:24.603 "timeout_admin_us": 0, 00:25:24.603 "keep_alive_timeout_ms": 10000, 00:25:24.603 "arbitration_burst": 0, 00:25:24.603 "low_priority_weight": 0, 00:25:24.603 "medium_priority_weight": 0, 00:25:24.603 "high_priority_weight": 0, 00:25:24.603 "nvme_adminq_poll_period_us": 10000, 00:25:24.603 "nvme_ioq_poll_period_us": 0, 00:25:24.603 "io_queue_requests": 512, 00:25:24.603 "delay_cmd_submit": true, 00:25:24.603 "transport_retry_count": 4, 00:25:24.603 "bdev_retry_count": 3, 00:25:24.603 "transport_ack_timeout": 0, 00:25:24.603 "ctrlr_loss_timeout_sec": 0, 00:25:24.603 "reconnect_delay_sec": 0, 00:25:24.603 "fast_io_fail_timeout_sec": 0, 00:25:24.603 "disable_auto_failback": false, 00:25:24.603 "generate_uuids": false, 00:25:24.603 "transport_tos": 0, 00:25:24.603 "nvme_error_stat": false, 00:25:24.603 "rdma_srq_size": 0, 00:25:24.603 "io_path_stat": false, 00:25:24.603 "allow_accel_sequence": false, 00:25:24.603 "rdma_max_cq_size": 0, 00:25:24.603 "rdma_cm_event_timeout_ms": 0, 00:25:24.603 "dhchap_digests": [ 00:25:24.603 "sha256", 00:25:24.603 "sha384", 00:25:24.603 "sha512" 00:25:24.603 ], 00:25:24.603 "dhchap_dhgroups": [ 00:25:24.603 "null", 00:25:24.603 "ffdhe2048", 00:25:24.603 "ffdhe3072", 00:25:24.603 "ffdhe4096", 00:25:24.603 "ffdhe6144", 00:25:24.603 "ffdhe8192" 00:25:24.603 ] 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_nvme_attach_controller", 00:25:24.603 "params": { 00:25:24.603 "name": "nvme0", 00:25:24.603 "trtype": "TCP", 00:25:24.603 "adrfam": "IPv4", 00:25:24.603 "traddr": "10.0.0.2", 00:25:24.603 "trsvcid": "4420", 00:25:24.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.603 "prchk_reftag": false, 00:25:24.603 
"prchk_guard": false, 00:25:24.603 "ctrlr_loss_timeout_sec": 0, 00:25:24.603 "reconnect_delay_sec": 0, 00:25:24.603 "fast_io_fail_timeout_sec": 0, 00:25:24.603 "psk": "key0", 00:25:24.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.603 "hdgst": false, 00:25:24.603 "ddgst": false, 00:25:24.603 "multipath": "multipath" 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_nvme_set_hotplug", 00:25:24.603 "params": { 00:25:24.603 "period_us": 100000, 00:25:24.603 "enable": false 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_enable_histogram", 00:25:24.603 "params": { 00:25:24.603 "name": "nvme0n1", 00:25:24.603 "enable": true 00:25:24.603 } 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "method": "bdev_wait_for_examine" 00:25:24.603 } 00:25:24.603 ] 00:25:24.603 }, 00:25:24.603 { 00:25:24.603 "subsystem": "nbd", 00:25:24.603 "config": [] 00:25:24.603 } 00:25:24.603 ] 00:25:24.603 }' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70878 ']' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:24.603 killing process with pid 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70878' 00:25:24.603 Received shutdown signal, test time was about 1.000000 seconds 00:25:24.603 00:25:24.603 Latency(us) 00:25:24.603 [2024-11-20T07:22:48.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.603 [2024-11-20T07:22:48.806Z] =================================================================================================================== 00:25:24.603 [2024-11-20T07:22:48.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70878 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 70852 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70852 ']' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70852 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70852 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.603 07:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.603 killing process with pid 70852 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70852' 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70852 00:25:24.603 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70852 00:25:24.864 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:24.864 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:24.864 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.864 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.864 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:24.864 "subsystems": [ 00:25:24.864 { 00:25:24.864 "subsystem": "keyring", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "keyring_file_add_key", 00:25:24.864 "params": { 00:25:24.864 "name": "key0", 00:25:24.864 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:24.864 } 00:25:24.864 } 00:25:24.864 ] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "iobuf", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "iobuf_set_options", 00:25:24.864 "params": { 00:25:24.864 "small_pool_count": 8192, 00:25:24.864 "large_pool_count": 1024, 00:25:24.864 "small_bufsize": 8192, 00:25:24.864 "large_bufsize": 135168, 00:25:24.864 "enable_numa": false 00:25:24.864 } 00:25:24.864 } 00:25:24.864 ] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "sock", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "sock_set_default_impl", 00:25:24.864 "params": { 00:25:24.864 "impl_name": "uring" 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "sock_impl_set_options", 00:25:24.864 "params": { 00:25:24.864 "impl_name": "ssl", 00:25:24.864 "recv_buf_size": 4096, 00:25:24.864 "send_buf_size": 4096, 00:25:24.864 "enable_recv_pipe": true, 00:25:24.864 "enable_quickack": false, 00:25:24.864 "enable_placement_id": 0, 00:25:24.864 "enable_zerocopy_send_server": true, 00:25:24.864 "enable_zerocopy_send_client": false, 00:25:24.864 "zerocopy_threshold": 0, 00:25:24.864 "tls_version": 0, 00:25:24.864 "enable_ktls": false 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "sock_impl_set_options", 00:25:24.864 "params": { 00:25:24.864 "impl_name": "posix", 00:25:24.864 "recv_buf_size": 2097152, 00:25:24.864 "send_buf_size": 2097152, 00:25:24.864 "enable_recv_pipe": true, 00:25:24.864 "enable_quickack": false, 00:25:24.864 "enable_placement_id": 0, 00:25:24.864 "enable_zerocopy_send_server": true, 00:25:24.864 "enable_zerocopy_send_client": false, 00:25:24.864 "zerocopy_threshold": 0, 00:25:24.864 "tls_version": 0, 00:25:24.864 "enable_ktls": false 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "sock_impl_set_options", 00:25:24.864 "params": { 00:25:24.864 "impl_name": "uring", 00:25:24.864 "recv_buf_size": 2097152, 00:25:24.864 "send_buf_size": 2097152, 00:25:24.864 "enable_recv_pipe": true, 00:25:24.864 "enable_quickack": false, 00:25:24.864 "enable_placement_id": 0, 00:25:24.864 "enable_zerocopy_send_server": false, 00:25:24.864 "enable_zerocopy_send_client": false, 00:25:24.864 "zerocopy_threshold": 0, 00:25:24.864 
"tls_version": 0, 00:25:24.864 "enable_ktls": false 00:25:24.864 } 00:25:24.864 } 00:25:24.864 ] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "vmd", 00:25:24.864 "config": [] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "accel", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "accel_set_options", 00:25:24.864 "params": { 00:25:24.864 "small_cache_size": 128, 00:25:24.864 "large_cache_size": 16, 00:25:24.864 "task_count": 2048, 00:25:24.864 "sequence_count": 2048, 00:25:24.864 "buf_count": 2048 00:25:24.864 } 00:25:24.864 } 00:25:24.864 ] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "bdev", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "bdev_set_options", 00:25:24.864 "params": { 00:25:24.864 "bdev_io_pool_size": 65535, 00:25:24.864 "bdev_io_cache_size": 256, 00:25:24.864 "bdev_auto_examine": true, 00:25:24.864 "iobuf_small_cache_size": 128, 00:25:24.864 "iobuf_large_cache_size": 16 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "bdev_raid_set_options", 00:25:24.864 "params": { 00:25:24.864 "process_window_size_kb": 1024, 00:25:24.864 "process_max_bandwidth_mb_sec": 0 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "bdev_iscsi_set_options", 00:25:24.864 "params": { 00:25:24.864 "timeout_sec": 30 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "bdev_nvme_set_options", 00:25:24.864 "params": { 00:25:24.864 "action_on_timeout": "none", 00:25:24.864 "timeout_us": 0, 00:25:24.864 "timeout_admin_us": 0, 00:25:24.864 "keep_alive_timeout_ms": 10000, 00:25:24.864 "arbitration_burst": 0, 00:25:24.864 "low_priority_weight": 0, 00:25:24.864 "medium_priority_weight": 0, 00:25:24.864 "high_priority_weight": 0, 00:25:24.864 "nvme_adminq_poll_period_us": 10000, 00:25:24.864 "nvme_ioq_poll_period_us": 0, 00:25:24.864 "io_queue_requests": 0, 00:25:24.864 "delay_cmd_submit": true, 00:25:24.864 "transport_retry_count": 4, 00:25:24.864 "bdev_retry_count": 3, 00:25:24.864 "transport_ack_timeout": 0, 00:25:24.864 "ctrlr_loss_timeout_sec": 0, 00:25:24.864 "reconnect_delay_sec": 0, 00:25:24.864 "fast_io_fail_timeout_sec": 0, 00:25:24.864 "disable_auto_failback": false, 00:25:24.864 "generate_uuids": false, 00:25:24.864 "transport_tos": 0, 00:25:24.864 "nvme_error_stat": false, 00:25:24.864 "rdma_srq_size": 0, 00:25:24.864 "io_path_stat": false, 00:25:24.864 "allow_accel_sequence": false, 00:25:24.864 "rdma_max_cq_size": 0, 00:25:24.864 "rdma_cm_event_timeout_ms": 0, 00:25:24.864 "dhchap_digests": [ 00:25:24.864 "sha256", 00:25:24.864 "sha384", 00:25:24.864 "sha512" 00:25:24.864 ], 00:25:24.864 "dhchap_dhgroups": [ 00:25:24.864 "null", 00:25:24.864 "ffdhe2048", 00:25:24.864 "ffdhe3072", 00:25:24.864 "ffdhe4096", 00:25:24.864 "ffdhe6144", 00:25:24.864 "ffdhe8192" 00:25:24.864 ] 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "bdev_nvme_set_hotplug", 00:25:24.864 "params": { 00:25:24.864 "period_us": 100000, 00:25:24.864 "enable": false 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "method": "bdev_malloc_create", 00:25:24.864 "params": { 00:25:24.864 "name": "malloc0", 00:25:24.864 "num_blocks": 8192, 00:25:24.864 "block_size": 4096, 00:25:24.864 "physical_block_size": 4096, 00:25:24.864 "uuid": "7b5e63d1-07de-4d9f-8d9c-88bcf44c24e9", 00:25:24.864 "optimal_io_boundary": 0, 00:25:24.864 "md_size": 0, 00:25:24.864 "dif_type": 0, 00:25:24.864 "dif_is_head_of_md": false, 00:25:24.864 "dif_pi_format": 0 00:25:24.864 } 00:25:24.864 }, 00:25:24.864 { 
00:25:24.864 "method": "bdev_wait_for_examine" 00:25:24.864 } 00:25:24.864 ] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "nbd", 00:25:24.864 "config": [] 00:25:24.864 }, 00:25:24.864 { 00:25:24.864 "subsystem": "scheduler", 00:25:24.864 "config": [ 00:25:24.864 { 00:25:24.864 "method": "framework_set_scheduler", 00:25:24.864 "params": { 00:25:24.864 "name": "static" 00:25:24.864 } 00:25:24.864 } 00:25:24.864 ] 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "subsystem": "nvmf", 00:25:24.865 "config": [ 00:25:24.865 { 00:25:24.865 "method": "nvmf_set_config", 00:25:24.865 "params": { 00:25:24.865 "discovery_filter": "match_any", 00:25:24.865 "admin_cmd_passthru": { 00:25:24.865 "identify_ctrlr": false 00:25:24.865 }, 00:25:24.865 "dhchap_digests": [ 00:25:24.865 "sha256", 00:25:24.865 "sha384", 00:25:24.865 "sha512" 00:25:24.865 ], 00:25:24.865 "dhchap_dhgroups": [ 00:25:24.865 "null", 00:25:24.865 "ffdhe2048", 00:25:24.865 "ffdhe3072", 00:25:24.865 "ffdhe4096", 00:25:24.865 "ffdhe6144", 00:25:24.865 "ffdhe8192" 00:25:24.865 ] 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_set_max_subsystems", 00:25:24.865 "params": { 00:25:24.865 "max_subsystems": 1024 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_set_crdt", 00:25:24.865 "params": { 00:25:24.865 "crdt1": 0, 00:25:24.865 "crdt2": 0, 00:25:24.865 "crdt3": 0 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_create_transport", 00:25:24.865 "params": { 00:25:24.865 "trtype": "TCP", 00:25:24.865 "max_queue_depth": 128, 00:25:24.865 "max_io_qpairs_per_ctrlr": 127, 00:25:24.865 "in_capsule_data_size": 4096, 00:25:24.865 "max_io_size": 131072, 00:25:24.865 "io_unit_size": 131072, 00:25:24.865 "max_aq_depth": 128, 00:25:24.865 "num_shared_buffers": 511, 00:25:24.865 "buf_cache_size": 4294967295, 00:25:24.865 "dif_insert_or_strip": false, 00:25:24.865 "zcopy": false, 00:25:24.865 "c2h_success": false, 00:25:24.865 "sock_priority": 0, 00:25:24.865 "abort_timeout_sec": 1, 00:25:24.865 "ack_timeout": 0, 00:25:24.865 "data_wr_pool_size": 0 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_create_subsystem", 00:25:24.865 "params": { 00:25:24.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.865 "allow_any_host": false, 00:25:24.865 "serial_number": "00000000000000000000", 00:25:24.865 "model_number": "SPDK bdev Controller", 00:25:24.865 "max_namespaces": 32, 00:25:24.865 "min_cntlid": 1, 00:25:24.865 "max_cntlid": 65519, 00:25:24.865 "ana_reporting": false 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_subsystem_add_host", 00:25:24.865 "params": { 00:25:24.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.865 "host": "nqn.2016-06.io.spdk:host1", 00:25:24.865 "psk": "key0" 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_subsystem_add_ns", 00:25:24.865 "params": { 00:25:24.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.865 "namespace": { 00:25:24.865 "nsid": 1, 00:25:24.865 "bdev_name": "malloc0", 00:25:24.865 "nguid": "7B5E63D107DE4D9F8D9C88BCF44C24E9", 00:25:24.865 "uuid": "7b5e63d1-07de-4d9f-8d9c-88bcf44c24e9", 00:25:24.865 "no_auto_visible": false 00:25:24.865 } 00:25:24.865 } 00:25:24.865 }, 00:25:24.865 { 00:25:24.865 "method": "nvmf_subsystem_add_listener", 00:25:24.865 "params": { 00:25:24.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.865 "listen_address": { 00:25:24.865 "trtype": "TCP", 00:25:24.865 "adrfam": "IPv4", 00:25:24.865 "traddr": "10.0.0.2", 00:25:24.865 
"trsvcid": "4420" 00:25:24.865 }, 00:25:24.865 "secure_channel": false, 00:25:24.865 "sock_impl": "ssl" 00:25:24.865 } 00:25:24.865 } 00:25:24.865 ] 00:25:24.865 } 00:25:24.865 ] 00:25:24.865 }' 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=70939 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 70939 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70939 ']' 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.865 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.865 [2024-11-20 07:22:48.932816] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:24.865 [2024-11-20 07:22:48.932867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.127 [2024-11-20 07:22:49.069071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.127 [2024-11-20 07:22:49.099008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.127 [2024-11-20 07:22:49.099046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.127 [2024-11-20 07:22:49.099052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.127 [2024-11-20 07:22:49.099056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.127 [2024-11-20 07:22:49.099060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:25.127 [2024-11-20 07:22:49.099306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.127 [2024-11-20 07:22:49.239918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:25.127 [2024-11-20 07:22:49.298942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.418 [2024-11-20 07:22:49.330893] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:25.418 [2024-11-20 07:22:49.331019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=70965 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 70965 /var/tmp/bdevperf.sock 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70965 ']' 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
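Before the bdevperf side comes up below, note the first entry in its config: keyring_file_add_key loads /tmp/tmp.VmZpFsi8I2, the TLS pre-shared key shared with the target's nvmf_subsystem_add_host call above (both name it "key0"). As a hedged aside, such a file holds a single line in the NVMe TLS PSK interchange format; the value below is a made-up placeholder, not the key from this run:

# Illustrative only: the format is "NVMeTLSkey-1:<hash-id>:<base64 key+CRC>:".
echo "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYxMjM0:" > /tmp/psk.txt
chmod 0600 /tmp/psk.txt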
00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:25.676 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:25.676 "subsystems": [ 00:25:25.676 { 00:25:25.676 "subsystem": "keyring", 00:25:25.676 "config": [ 00:25:25.676 { 00:25:25.676 "method": "keyring_file_add_key", 00:25:25.676 "params": { 00:25:25.676 "name": "key0", 00:25:25.676 "path": "/tmp/tmp.VmZpFsi8I2" 00:25:25.676 } 00:25:25.676 } 00:25:25.676 ] 00:25:25.676 }, 00:25:25.676 { 00:25:25.676 "subsystem": "iobuf", 00:25:25.676 "config": [ 00:25:25.676 { 00:25:25.676 "method": "iobuf_set_options", 00:25:25.676 "params": { 00:25:25.676 "small_pool_count": 8192, 00:25:25.677 "large_pool_count": 1024, 00:25:25.677 "small_bufsize": 8192, 00:25:25.677 "large_bufsize": 135168, 00:25:25.677 "enable_numa": false 00:25:25.677 } 00:25:25.677 } 00:25:25.677 ] 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "subsystem": "sock", 00:25:25.677 "config": [ 00:25:25.677 { 00:25:25.677 "method": "sock_set_default_impl", 00:25:25.677 "params": { 00:25:25.677 "impl_name": "uring" 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "sock_impl_set_options", 00:25:25.677 "params": { 00:25:25.677 "impl_name": "ssl", 00:25:25.677 "recv_buf_size": 4096, 00:25:25.677 "send_buf_size": 4096, 00:25:25.677 "enable_recv_pipe": true, 00:25:25.677 "enable_quickack": false, 00:25:25.677 "enable_placement_id": 0, 00:25:25.677 "enable_zerocopy_send_server": true, 00:25:25.677 "enable_zerocopy_send_client": false, 00:25:25.677 "zerocopy_threshold": 0, 00:25:25.677 "tls_version": 0, 00:25:25.677 "enable_ktls": false 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "sock_impl_set_options", 00:25:25.677 "params": { 00:25:25.677 "impl_name": "posix", 00:25:25.677 "recv_buf_size": 2097152, 00:25:25.677 "send_buf_size": 2097152, 00:25:25.677 "enable_recv_pipe": true, 00:25:25.677 "enable_quickack": false, 00:25:25.677 "enable_placement_id": 0, 00:25:25.677 "enable_zerocopy_send_server": true, 00:25:25.677 "enable_zerocopy_send_client": false, 00:25:25.677 "zerocopy_threshold": 0, 00:25:25.677 "tls_version": 0, 00:25:25.677 "enable_ktls": false 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "sock_impl_set_options", 00:25:25.677 "params": { 00:25:25.677 "impl_name": "uring", 00:25:25.677 "recv_buf_size": 2097152, 00:25:25.677 "send_buf_size": 2097152, 00:25:25.677 "enable_recv_pipe": true, 00:25:25.677 "enable_quickack": false, 00:25:25.677 "enable_placement_id": 0, 00:25:25.677 "enable_zerocopy_send_server": false, 00:25:25.677 "enable_zerocopy_send_client": false, 00:25:25.677 "zerocopy_threshold": 0, 00:25:25.677 "tls_version": 0, 00:25:25.677 "enable_ktls": false 00:25:25.677 } 00:25:25.677 } 00:25:25.677 ] 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "subsystem": "vmd", 00:25:25.677 "config": [] 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "subsystem": "accel", 00:25:25.677 "config": [ 00:25:25.677 { 00:25:25.677 "method": "accel_set_options", 00:25:25.677 "params": { 00:25:25.677 "small_cache_size": 128, 00:25:25.677 "large_cache_size": 16, 00:25:25.677 "task_count": 2048, 00:25:25.677 "sequence_count": 2048, 
00:25:25.677 "buf_count": 2048 00:25:25.677 } 00:25:25.677 } 00:25:25.677 ] 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "subsystem": "bdev", 00:25:25.677 "config": [ 00:25:25.677 { 00:25:25.677 "method": "bdev_set_options", 00:25:25.677 "params": { 00:25:25.677 "bdev_io_pool_size": 65535, 00:25:25.677 "bdev_io_cache_size": 256, 00:25:25.677 "bdev_auto_examine": true, 00:25:25.677 "iobuf_small_cache_size": 128, 00:25:25.677 "iobuf_large_cache_size": 16 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_raid_set_options", 00:25:25.677 "params": { 00:25:25.677 "process_window_size_kb": 1024, 00:25:25.677 "process_max_bandwidth_mb_sec": 0 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_iscsi_set_options", 00:25:25.677 "params": { 00:25:25.677 "timeout_sec": 30 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_nvme_set_options", 00:25:25.677 "params": { 00:25:25.677 "action_on_timeout": "none", 00:25:25.677 "timeout_us": 0, 00:25:25.677 "timeout_admin_us": 0, 00:25:25.677 "keep_alive_timeout_ms": 10000, 00:25:25.677 "arbitration_burst": 0, 00:25:25.677 "low_priority_weight": 0, 00:25:25.677 "medium_priority_weight": 0, 00:25:25.677 "high_priority_weight": 0, 00:25:25.677 "nvme_adminq_poll_period_us": 10000, 00:25:25.677 "nvme_ioq_poll_period_us": 0, 00:25:25.677 "io_queue_requests": 512, 00:25:25.677 "delay_cmd_submit": true, 00:25:25.677 "transport_retry_count": 4, 00:25:25.677 "bdev_retry_count": 3, 00:25:25.677 "transport_ack_timeout": 0, 00:25:25.677 "ctrlr_loss_timeout_sec": 0, 00:25:25.677 "reconnect_delay_sec": 0, 00:25:25.677 "fast_io_fail_timeout_sec": 0, 00:25:25.677 "disable_auto_failback": false, 00:25:25.677 "generate_uuids": false, 00:25:25.677 "transport_tos": 0, 00:25:25.677 "nvme_error_stat": false, 00:25:25.677 "rdma_srq_size": 0, 00:25:25.677 "io_path_stat": false, 00:25:25.677 "allow_accel_sequence": false, 00:25:25.677 "rdma_max_cq_size": 0, 00:25:25.677 "rdma_cm_event_timeout_ms": 0, 00:25:25.677 "dhchap_digests": [ 00:25:25.677 "sha256", 00:25:25.677 "sha384", 00:25:25.677 "sha512" 00:25:25.677 ], 00:25:25.677 "dhchap_dhgroups": [ 00:25:25.677 "null", 00:25:25.677 "ffdhe2048", 00:25:25.677 "ffdhe3072", 00:25:25.677 "ffdhe4096", 00:25:25.677 "ffdhe6144", 00:25:25.677 "ffdhe8192" 00:25:25.677 ] 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_nvme_attach_controller", 00:25:25.677 "params": { 00:25:25.677 "name": "nvme0", 00:25:25.677 "trtype": "TCP", 00:25:25.677 "adrfam": "IPv4", 00:25:25.677 "traddr": "10.0.0.2", 00:25:25.677 "trsvcid": "4420", 00:25:25.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.677 "prchk_reftag": false, 00:25:25.677 "prchk_guard": false, 00:25:25.677 "ctrlr_loss_timeout_sec": 0, 00:25:25.677 "reconnect_delay_sec": 0, 00:25:25.677 "fast_io_fail_timeout_sec": 0, 00:25:25.677 "psk": "key0", 00:25:25.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.677 "hdgst": false, 00:25:25.677 "ddgst": false, 00:25:25.677 "multipath": "multipath" 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_nvme_set_hotplug", 00:25:25.677 "params": { 00:25:25.677 "period_us": 100000, 00:25:25.677 "enable": false 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_enable_histogram", 00:25:25.677 "params": { 00:25:25.677 "name": "nvme0n1", 00:25:25.677 "enable": true 00:25:25.677 } 00:25:25.677 }, 00:25:25.677 { 00:25:25.677 "method": "bdev_wait_for_examine" 00:25:25.677 } 00:25:25.677 ] 00:25:25.677 }, 00:25:25.677 { 
00:25:25.677 "subsystem": "nbd", 00:25:25.677 "config": [] 00:25:25.677 } 00:25:25.677 ] 00:25:25.677 }' 00:25:25.677 [2024-11-20 07:22:49.868200] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:25.677 [2024-11-20 07:22:49.868292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70965 ] 00:25:25.942 [2024-11-20 07:22:50.006159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.942 [2024-11-20 07:22:50.042231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.200 [2024-11-20 07:22:50.153889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.200 [2024-11-20 07:22:50.189838] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:26.766 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.766 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:26.766 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.766 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:27.024 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.024 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.024 Running I/O for 1 seconds... 
00:25:28.000 6382.00 IOPS, 24.93 MiB/s
00:25:28.001 Latency(us)
00:25:28.001 [2024-11-20T07:22:52.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:28.001 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:25:28.001 Verification LBA range: start 0x0 length 0x2000
00:25:28.001 nvme0n1 : 1.01 6450.56 25.20 0.00 0.00 19738.28 2722.26 15728.64
00:25:28.001 [2024-11-20T07:22:52.204Z] ===================================================================================================================
00:25:28.001 [2024-11-20T07:22:52.204Z] Total : 6450.56 25.20 0.00 0.00 19738.28 2722.26 15728.64
00:25:28.001 {
00:25:28.001 "results": [
00:25:28.001 {
00:25:28.001 "job": "nvme0n1",
00:25:28.001 "core_mask": "0x2",
00:25:28.001 "workload": "verify",
00:25:28.001 "status": "finished",
00:25:28.001 "verify_range": {
00:25:28.001 "start": 0,
00:25:28.001 "length": 8192
00:25:28.001 },
00:25:28.001 "queue_depth": 128,
00:25:28.001 "io_size": 4096,
00:25:28.001 "runtime": 1.009214,
00:25:28.001 "iops": 6450.564498708896,
00:25:28.001 "mibps": 25.197517573081626,
00:25:28.001 "io_failed": 0,
00:25:28.001 "io_timeout": 0,
00:25:28.001 "avg_latency_us": 19738.2769797944,
00:25:28.001 "min_latency_us": 2722.2646153846154,
00:25:28.001 "max_latency_us": 15728.64
00:25:28.001 }
00:25:28.001 ],
00:25:28.001 "core_count": 1
00:25:28.001 }
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:25:28.001 nvmf_trace.0
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 70965
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70965 ']'
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70965
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70965
00:25:28.001 07:22:52
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:28.001 killing process with pid 70965 00:25:28.001 Received shutdown signal, test time was about 1.000000 seconds 00:25:28.001 00:25:28.001 Latency(us) 00:25:28.001 [2024-11-20T07:22:52.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.001 [2024-11-20T07:22:52.204Z] =================================================================================================================== 00:25:28.001 [2024-11-20T07:22:52.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70965' 00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70965 00:25:28.001 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70965 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:28.259 rmmod nvme_tcp 00:25:28.259 rmmod nvme_fabrics 00:25:28.259 rmmod nvme_keyring 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 70939 ']' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 70939 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70939 ']' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70939 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70939 00:25:28.259 killing process with pid 70939 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70939' 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70939 00:25:28.259 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 
70939 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:28.518 07:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NTlhK7Yymh /tmp/tmp.Cx6JdcT6gm /tmp/tmp.VmZpFsi8I2 00:25:28.518 ************************************ 00:25:28.518 END TEST nvmf_tls 00:25:28.518 ************************************ 00:25:28.518 00:25:28.518 real 1m19.028s 00:25:28.518 user 2m11.396s 00:25:28.518 sys 0m21.378s 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:28.518 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.519 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.519 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:28.778 ************************************ 00:25:28.778 START TEST nvmf_fips 00:25:28.778 ************************************ 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:28.778 * Looking for test storage... 
00:25:28.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.778 --rc genhtml_branch_coverage=1 00:25:28.778 --rc genhtml_function_coverage=1 00:25:28.778 --rc genhtml_legend=1 00:25:28.778 --rc geninfo_all_blocks=1 00:25:28.778 --rc geninfo_unexecuted_blocks=1 00:25:28.778 00:25:28.778 ' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.778 --rc genhtml_branch_coverage=1 00:25:28.778 --rc genhtml_function_coverage=1 00:25:28.778 --rc genhtml_legend=1 00:25:28.778 --rc geninfo_all_blocks=1 00:25:28.778 --rc geninfo_unexecuted_blocks=1 00:25:28.778 00:25:28.778 ' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.778 --rc genhtml_branch_coverage=1 00:25:28.778 --rc genhtml_function_coverage=1 00:25:28.778 --rc genhtml_legend=1 00:25:28.778 --rc geninfo_all_blocks=1 00:25:28.778 --rc geninfo_unexecuted_blocks=1 00:25:28.778 00:25:28.778 ' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.778 --rc genhtml_branch_coverage=1 00:25:28.778 --rc genhtml_function_coverage=1 00:25:28.778 --rc genhtml_legend=1 00:25:28.778 --rc geninfo_all_blocks=1 00:25:28.778 --rc geninfo_unexecuted_blocks=1 00:25:28.778 00:25:28.778 ' 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
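The long cmp_versions walk traced above is a field-by-field dotted-version comparison: here it gates lcov (1.15 against 2), and the same helper later accepts OpenSSL 3.1.1 against the 3.0.0 floor. A simplified stand-in with the same outcome, using coreutils sort -V rather than the scripts/common.sh loop:

# Simplified illustration, not the scripts/common.sh implementation.
ver_ge() {  # ver_ge A B -> success when version A >= version B
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
ver_ge 3.1.1 3.0.0 && echo "OpenSSL new enough for the FIPS checks"
ver_ge 1.15 2 || echo "lcov 1.15 predates the 2.x options"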
00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.778 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:28.779 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@86 -- # awk '{print $2}' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:28.779 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:28.780 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:29.039 Error setting digest 00:25:29.039 40C22D5D2D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:29.039 40C22D5D2D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:29.039 
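The "Error setting digest" block above is the expected outcome, not a test failure: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the Red Hat FIPS provider refuses MD5, and the NOT wrapper turns that refusal into a pass (es=1). The same behaviour can be reproduced by hand, assuming a FIPS-enabled OpenSSL 3.x like the one in this run:

# Expected to fail under the FIPS provider, exactly as traced above.
export OPENSSL_CONF=spdk_fips.conf
if ! echo -n test | openssl md5; then
    echo "MD5 rejected -> FIPS provider is active"
fi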
07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@223 -- # create_target_ns 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:29.039 07:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:29.039 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:29.040 07:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:29.040 10.0.0.1 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:29.040 10.0.0.2 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:29.040 
07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:29.040 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:29.041 
07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772163 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:29.041 10.0.0.3 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772164 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:29.041 10.0.0.4 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:29.041 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:29.301 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:29.302 
07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:29.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
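At this point setup_interfaces 2 veth is complete and the ping sweep that follows verifies it. A recap of the topology the trace just built, plus two inspection commands (the inspection commands are suggestions, not part of the log):

  # initiator0 (10.0.0.1, default ns)   <-veth-> initiator0_br, port of nvmf_br
  # target0    (10.0.0.2, nvmf_ns_spdk) <-veth-> target0_br,    port of nvmf_br
  # initiator1 (10.0.0.3, default ns)   <-veth-> initiator1_br, port of nvmf_br
  # target1    (10.0.0.4, nvmf_ns_spdk) <-veth-> target1_br,    port of nvmf_br
  bridge link show | grep nvmf_br        # lists the four *_br ports
  ip -n nvmf_ns_spdk -brief addr show    # target0/target1 inside the namespace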
00:25:29.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:25:29.302 00:25:29.302 --- 10.0.0.1 ping statistics --- 00:25:29.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.302 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:29.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
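Worth noting in the address lookups here: get_ip_address never parses 'ip addr' output. set_ip stored each address in the device's ifalias earlier, so recovery is a plain file read, run through 'ip netns exec' for the target side:

  cat /sys/class/net/initiator0/ifalias                           # 10.0.0.1
  ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias   # 10.0.0.2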
00:25:29.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:25:29.302 00:25:29.302 --- 10.0.0.2 ping statistics --- 00:25:29.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.302 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:29.302 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:29.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
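All four addresses come from one integer pool: setup_interfaces starts at ip_pool=0x0a000001 (167772161) and each pair consumes two consecutive values, which val_to_ip renders as dotted quads. A plausible reconstruction of that helper, since the log traces only its final printf:

  val_to_ip() {   # sketch; setup.sh's actual body is not shown in this log
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $(( val >> 24 )) $(( (val >> 16) & 0xff )) \
      $(( (val >> 8) & 0xff )) $(( val & 0xff ))
  }
  val_to_ip 167772163   # -> 10.0.0.3, as assigned to initiator1 above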
00:25:29.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:25:29.303 00:25:29.303 --- 10.0.0.3 ping statistics --- 00:25:29.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.303 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:29.303 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
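The sweep pattern is deliberate: initiator addresses are pinged from inside nvmf_ns_spdk while target addresses are pinged from the default namespace, proving connectivity in both directions across the bridge. Reduced to the four commands the trace runs:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target side -> initiator0
  ping -c 1 10.0.0.2                              # initiator side -> target0
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # target side -> initiator1
  ping -c 1 10.0.0.4                              # initiator side -> target1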
00:25:29.303 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:25:29.303 00:25:29.303 --- 10.0.0.4 ping statistics --- 00:25:29.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.303 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # return 0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:29.303 07:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:29.303 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:29.304 ' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=71277 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 71277 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71277 ']' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.304 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:29.304 [2024-11-20 07:22:53.432787] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:29.304 [2024-11-20 07:22:53.432849] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.563 [2024-11-20 07:22:53.573357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.563 [2024-11-20 07:22:53.608456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.563 [2024-11-20 07:22:53.608495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.563 [2024-11-20 07:22:53.608501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.563 [2024-11-20 07:22:53.608506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.563 [2024-11-20 07:22:53.608511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
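nvmf_legacy_env has flattened dev_map into the variables the FIPS script consumes, and nvmfappstart launches the target inside the namespace, which is why NVMF_APP gets prefixed with the netns exec command at setup.sh@227. Condensed from this run:

  # Legacy variables derived above:
  NVMF_FIRST_INITIATOR_IP=10.0.0.1   NVMF_SECOND_INITIATOR_IP=10.0.0.3
  NVMF_FIRST_TARGET_IP=10.0.0.2      NVMF_SECOND_TARGET_IP=10.0.0.4
  # Target launch, as traced (pid 71277 in this run):
  ip netns exec nvmf_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2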
00:25:29.563 [2024-11-20 07:22:53.608780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.563 [2024-11-20 07:22:53.639425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.40a 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.130 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.40a 00:25:30.389 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.40a 00:25:30.389 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.40a 00:25:30.389 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.389 [2024-11-20 07:22:54.518742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.389 [2024-11-20 07:22:54.534689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.389 [2024-11-20 07:22:54.534838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.389 malloc0 00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=71313 00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 71313 /var/tmp/bdevperf.sock 00:25:30.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
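The key handling above, condensed: fips.sh writes the TLS PSK in NVMe/TCP interchange format to a mktemp file and locks it to mode 0600 before the target is configured over rpc.py to listen with TLS on 10.0.0.2:4420. As plain commands, with the key material being the test's own:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)   # /tmp/spdk-psk.40a in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"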
00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71313 ']' 00:25:30.646 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.647 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.647 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.647 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.647 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.647 [2024-11-20 07:22:54.644444] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:30.647 [2024-11-20 07:22:54.644638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71313 ] 00:25:30.647 [2024-11-20 07:22:54.784006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.647 [2024-11-20 07:22:54.821973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.912 [2024-11-20 07:22:54.853002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:31.489 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.489 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:31.489 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.40a 00:25:31.747 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:31.747 [2024-11-20 07:22:55.891852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:32.005 TLSTESTn1 00:25:32.005 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.005 Running I/O for 10 seconds... 
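On the initiator side, the same PSK is registered with bdevperf's keyring and a TLS controller is attached before the 10-second verify run whose per-second throughput follows. The RPC sequence from the trace, with the full spdk_repo script paths shortened:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.40a
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests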
00:25:34.310 5644.00 IOPS, 22.05 MiB/s [2024-11-20T07:22:59.077Z] 5658.00 IOPS, 22.10 MiB/s [2024-11-20T07:23:00.452Z] 5751.33 IOPS, 22.47 MiB/s [2024-11-20T07:23:01.385Z] 6102.00 IOPS, 23.84 MiB/s [2024-11-20T07:23:02.350Z] 6312.60 IOPS, 24.66 MiB/s [2024-11-20T07:23:03.283Z] 6427.50 IOPS, 25.11 MiB/s [2024-11-20T07:23:04.217Z] 6514.71 IOPS, 25.45 MiB/s [2024-11-20T07:23:05.190Z] 6589.62 IOPS, 25.74 MiB/s [2024-11-20T07:23:06.198Z] 6651.56 IOPS, 25.98 MiB/s [2024-11-20T07:23:06.198Z] 6682.00 IOPS, 26.10 MiB/s 00:25:41.995 Latency(us) 00:25:41.995 [2024-11-20T07:23:06.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.995 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:41.995 Verification LBA range: start 0x0 length 0x2000 00:25:41.995 TLSTESTn1 : 10.01 6687.76 26.12 0.00 0.00 19108.21 4007.78 16736.89 00:25:41.995 [2024-11-20T07:23:06.198Z] =================================================================================================================== 00:25:41.995 [2024-11-20T07:23:06.198Z] Total : 6687.76 26.12 0.00 0.00 19108.21 4007.78 16736.89 00:25:41.995 { 00:25:41.995 "results": [ 00:25:41.995 { 00:25:41.995 "job": "TLSTESTn1", 00:25:41.995 "core_mask": "0x4", 00:25:41.995 "workload": "verify", 00:25:41.995 "status": "finished", 00:25:41.995 "verify_range": { 00:25:41.995 "start": 0, 00:25:41.995 "length": 8192 00:25:41.995 }, 00:25:41.995 "queue_depth": 128, 00:25:41.995 "io_size": 4096, 00:25:41.995 "runtime": 10.010221, 00:25:41.995 "iops": 6687.764435969995, 00:25:41.995 "mibps": 26.124079828007794, 00:25:41.995 "io_failed": 0, 00:25:41.995 "io_timeout": 0, 00:25:41.995 "avg_latency_us": 19108.21063166869, 00:25:41.995 "min_latency_us": 4007.7784615384617, 00:25:41.995 "max_latency_us": 16736.886153846153 00:25:41.995 } 00:25:41.995 ], 00:25:41.995 "core_count": 1 00:25:41.995 } 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:41.995 nvmf_trace.0 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 71313 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71313 ']' 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
71313 00:25:41.995 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71313 00:25:42.273 killing process with pid 71313 00:25:42.273 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.273 00:25:42.273 Latency(us) 00:25:42.273 [2024-11-20T07:23:06.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.273 [2024-11-20T07:23:06.476Z] =================================================================================================================== 00:25:42.273 [2024-11-20T07:23:06.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71313' 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71313 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71313 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:42.273 rmmod nvme_tcp 00:25:42.273 rmmod nvme_fabrics 00:25:42.273 rmmod nvme_keyring 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:25:42.273 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 71277 ']' 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 71277 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71277 ']' 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 71277 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71277 00:25:42.274 killing process with pid 71277 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71277' 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71277 00:25:42.274 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71277 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.40a 00:25:42.532 00:25:42.532 real 0m13.968s 00:25:42.532 user 0m20.549s 00:25:42.532 sys 0m4.520s 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:42.532 ************************************ 00:25:42.532 END TEST nvmf_fips 00:25:42.532 ************************************ 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:42.532 ************************************ 00:25:42.532 START TEST nvmf_control_msg_list 00:25:42.532 ************************************ 00:25:42.532 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:42.793 * Looking for test storage... 
00:25:42.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.793 --rc genhtml_branch_coverage=1 00:25:42.793 --rc genhtml_function_coverage=1 00:25:42.793 --rc genhtml_legend=1 00:25:42.793 --rc geninfo_all_blocks=1 00:25:42.793 --rc geninfo_unexecuted_blocks=1 00:25:42.793 00:25:42.793 ' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.793 --rc genhtml_branch_coverage=1 00:25:42.793 --rc genhtml_function_coverage=1 00:25:42.793 --rc genhtml_legend=1 00:25:42.793 --rc geninfo_all_blocks=1 00:25:42.793 --rc geninfo_unexecuted_blocks=1 00:25:42.793 00:25:42.793 ' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.793 --rc genhtml_branch_coverage=1 00:25:42.793 --rc genhtml_function_coverage=1 00:25:42.793 --rc genhtml_legend=1 00:25:42.793 --rc geninfo_all_blocks=1 00:25:42.793 --rc geninfo_unexecuted_blocks=1 00:25:42.793 00:25:42.793 ' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.793 --rc genhtml_branch_coverage=1 00:25:42.793 --rc genhtml_function_coverage=1 00:25:42.793 --rc genhtml_legend=1 00:25:42.793 --rc geninfo_all_blocks=1 00:25:42.793 --rc geninfo_unexecuted_blocks=1 00:25:42.793 00:25:42.793 ' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.793 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:42.794 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
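The "[: : integer expression expected" message above is a real scripting error captured by the trace: common.sh line 31 evaluates '[' '' -eq 1 ']', and the single-bracket -eq operator requires both operands to be integers, so an unset or empty variable produces exactly this complaint. A minimal sketch of the defensive pattern, assuming a hypothetical variable name (the actual variable tested at common.sh line 31 is not visible in this trace):

    # Reproduces the error captured in the log:
    #   [ "" -eq 1 ]   ->  [: : integer expression expected
    # Defaulting the expansion guarantees the test always sees an integer.
    # "hugepage_flag" is a placeholder, not the actual common.sh variable.
    hugepage_flag=""
    if [ "${hugepage_flag:-0}" -eq 1 ]; then
        echo "flag set"
    fi

The run is unaffected because the comparison presumably sits in an if condition: the failed test simply evaluates false, the branch is skipped, and execution falls through to the '[' -n '' ']' check at common.sh line 35 seen in the next trace entry.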
00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@223 -- # create_target_ns 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # 
eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:42.794 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:42.794 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set 
target0_br up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:42.795 10.0.0.1 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:42.795 10.0.0.2 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:42.795 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:42.795 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:43.056 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:43.056 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target1 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772163 00:25:43.056 07:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:43.056 10.0.0.3 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772164 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:43.056 10.0.0.4 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.056 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:43.057 07:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:43.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:25:43.057 00:25:43.057 --- 10.0.0.1 ping statistics --- 00:25:43.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.057 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:43.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:43.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:25:43.057 00:25:43.057 --- 10.0.0.2 ping statistics --- 00:25:43.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.057 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:43.057 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:43.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:43.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:25:43.058 00:25:43.058 --- 10.0.0.3 ping statistics --- 00:25:43.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.058 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:43.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:43.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:25:43.058 00:25:43.058 --- 10.0.0.4 ping statistics --- 00:25:43.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.058 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # return 0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:43.058 
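
Every iteration of the loop just traced follows one pattern per device: resolve the IP by reading /sys/class/net/<dev>/ifalias (wrapped in `ip netns exec nvmf_ns_spdk` for target-side devices), then confirm reachability with a single ping. A minimal standalone sketch of that lookup-and-ping pattern; the helper names get_dev_ip and ping_once are illustrative, not the real setup.sh functions:

  # Sketch: resolve a device's IP from its ifalias, then ping it once.
  # Pass a namespace name as $2 to run both steps inside that netns.
  get_dev_ip() {
      local dev=$1 ns=${2:+ip netns exec $2}
      $ns cat "/sys/class/net/$dev/ifalias"   # setup.sh records each device's IP here
  }
  ping_once() {
      local ip=$1 ns=${2:+ip netns exec $2}
      $ns ping -c 1 "$ip"
  }
  ip=$(get_dev_ip target0 nvmf_ns_spdk)       # 10.0.0.2 in the run above
  [[ -n $ip ]] && ping_once "$ip"             # reachability check from the root ns
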
07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:43.058 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:43.059 ' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:43.059 07:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=71702 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 71702 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 71702 ']' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.059 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 [2024-11-20 07:23:07.221825] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:43.059 [2024-11-20 07:23:07.221874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.316 [2024-11-20 07:23:07.361070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.316 [2024-11-20 07:23:07.396029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.316 [2024-11-20 07:23:07.396069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.316 [2024-11-20 07:23:07.396075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.316 [2024-11-20 07:23:07.396080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.316 [2024-11-20 07:23:07.396085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
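
nvmfappstart boots the target inside the namespace, then waitforlisten polls until the app's RPC socket answers. A simplified sketch of that start-and-wait step, assuming SPDK's rpc.py and its rpc_get_methods call; the real waitforlisten in common/autotest_common.sh does more retry bookkeeping:

  # Sketch: start nvmf_tgt in the test namespace, then wait for its RPC socket.
  # Binary path and flags match the trace; the polling loop is simplified.
  ip netns exec nvmf_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods fails until the app listens on /var/tmp/spdk.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
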
00:25:43.316 [2024-11-20 07:23:07.396357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.316 [2024-11-20 07:23:07.427940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 [2024-11-20 07:23:08.128832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 Malloc0 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:44.250 [2024-11-20 07:23:08.163377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=71728 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=71729 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=71730 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 71728 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:44.250 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:44.250 [2024-11-20 07:23:08.331619] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:44.250 [2024-11-20 07:23:08.341787] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:44.250 [2024-11-20 07:23:08.341929] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:45.183 Initializing NVMe Controllers 00:25:45.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:45.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:45.183 Initialization complete. Launching workers. 
00:25:45.183 ======================================================== 00:25:45.183 Latency(us) 00:25:45.183 Device Information : IOPS MiB/s Average min max 00:25:45.183 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4413.00 17.24 226.32 105.22 969.73 00:25:45.183 ======================================================== 00:25:45.183 Total : 4413.00 17.24 226.32 105.22 969.73 00:25:45.183 00:25:45.183 Initializing NVMe Controllers 00:25:45.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:45.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:45.183 Initialization complete. Launching workers. 00:25:45.183 ======================================================== 00:25:45.183 Latency(us) 00:25:45.183 Device Information : IOPS MiB/s Average min max 00:25:45.183 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4201.00 16.41 237.64 143.48 514.04 00:25:45.183 ======================================================== 00:25:45.183 Total : 4201.00 16.41 237.64 143.48 514.04 00:25:45.183 00:25:45.183 Initializing NVMe Controllers 00:25:45.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:45.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:45.183 Initialization complete. Launching workers. 00:25:45.183 ======================================================== 00:25:45.183 Latency(us) 00:25:45.183 Device Information : IOPS MiB/s Average min max 00:25:45.183 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4435.00 17.32 225.20 107.18 356.12 00:25:45.183 ======================================================== 00:25:45.183 Total : 4435.00 17.32 225.20 107.18 356.12 00:25:45.183 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 71729 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 71730 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:45.183 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:45.441 rmmod nvme_tcp 00:25:45.441 rmmod nvme_fabrics 00:25:45.441 rmmod nvme_keyring 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 
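
Stripped of the rpc_cmd wrapper, the test body that produced the three reports above is five RPCs followed by three concurrent one-second perf clients pinned to different cores. The same sequence as plain commands, every argument copied from the trace (the stray -o rides along from NVMF_TRANSPORT_OPTS; the individual perf pids are condensed into a loop here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
  $rpc bdev_malloc_create -b Malloc0 32 512                       # 32 MiB, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Three 4 KiB randread clients, queue depth 1, one second, cores 1-3:
  for mask in 0x2 0x4 0x8; do
      $perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait
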
71702 ']' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 71702 ']' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.441 killing process with pid 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71702' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 71702 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:45.441 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.699 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:45.699 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:45.699 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:45.699 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:45.700 
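
nvmftestfini unwinds the setup in reverse: kernel initiator modules out, target process killed, namespace and bridge removed, initiator-side veths deleted (the target-side ends disappear with the namespace, hence the two `continue`s in the trace that follows), and only the iptables rules tagged SPDK_NVMF stripped. A compact sketch of that teardown path, reusing this run's names:

  # Compact sketch of the teardown; device names and pid match this run.
  modprobe -v -r nvme-tcp nvme-fabrics        # trace shows nvme_keyring dropping too
  kill "$nvmfpid" && wait "$nvmfpid"          # stop nvmf_tgt (pid 71702 above)
  ip netns delete nvmf_ns_spdk                # takes target0/target1 with it
  ip link delete nvmf_br                      # the step just logged
  ip link delete initiator0
  ip link delete initiator1
  # Keep every iptables rule except those tagged SPDK_NVMF during setup:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
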
07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-save 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:25:45.700 00:25:45.700 real 0m3.068s 00:25:45.700 user 0m5.478s 00:25:45.700 sys 0m0.974s 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:45.700 ************************************ 00:25:45.700 END TEST nvmf_control_msg_list 00:25:45.700 
************************************ 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.700 ************************************ 00:25:45.700 START TEST nvmf_wait_for_buf 00:25:45.700 ************************************ 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:45.700 * Looking for test storage... 00:25:45.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:45.700 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.961 --rc genhtml_branch_coverage=1 00:25:45.961 --rc genhtml_function_coverage=1 00:25:45.961 --rc genhtml_legend=1 00:25:45.961 --rc geninfo_all_blocks=1 00:25:45.961 --rc geninfo_unexecuted_blocks=1 00:25:45.961 00:25:45.961 ' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.961 --rc genhtml_branch_coverage=1 00:25:45.961 --rc genhtml_function_coverage=1 00:25:45.961 --rc genhtml_legend=1 00:25:45.961 --rc geninfo_all_blocks=1 00:25:45.961 --rc geninfo_unexecuted_blocks=1 00:25:45.961 00:25:45.961 ' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.961 --rc genhtml_branch_coverage=1 00:25:45.961 --rc genhtml_function_coverage=1 00:25:45.961 --rc genhtml_legend=1 00:25:45.961 --rc geninfo_all_blocks=1 00:25:45.961 --rc geninfo_unexecuted_blocks=1 00:25:45.961 00:25:45.961 ' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.961 --rc genhtml_branch_coverage=1 00:25:45.961 --rc genhtml_function_coverage=1 00:25:45.961 --rc genhtml_legend=1 00:25:45.961 --rc geninfo_all_blocks=1 00:25:45.961 --rc geninfo_unexecuted_blocks=1 00:25:45.961 00:25:45.961 ' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.961 07:23:09 
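
The scripts/common.sh trace above is a dotted-version comparison deciding whether the installed lcov (1.15) predates 2.x, which selects the older set of --rc coverage options. The same field-by-field compare can be sketched as a standalone helper; version_lt is an illustrative name, and the real cmp_versions also handles >, >=, and <=:

  # Sketch: return 0 when dotted version $1 is strictly less than $2.
  version_lt() {
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((i = 0; i < max; i++)); do
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # missing fields count as 0
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "use old lcov options"   # matches the 1.15 < 2 check above
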
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # : 0 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:45.961 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:45.962 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:45.962 07:23:09 
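
The "[: : integer expression expected" message above is a harmless but real glitch: common.sh line 31 feeds an empty string to a numeric test, and `-eq` errors out instead of evaluating false when an operand is not an integer. A guarded form of the pattern; the variable name flag is hypothetical, since the actual variable at line 31 is not visible in this trace:

  # Failing pattern, as logged: an empty string in a numeric comparison.
  flag=""
  # [ "$flag" -eq 1 ]                  # -> "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo yes   # empty defaults to 0, test is simply false
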
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@223 -- # create_target_ns 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:45.962 07:23:09 
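
nvmf_veth_init starts from a clean slate: a dedicated network namespace for the target, a bridge to join all the veth peer ends, and a FORWARD accept rule tagged with a comment so teardown can later find and strip it with a grep. The equivalent bare commands, as eval'd in the trace above:

  ip netns add nvmf_ns_spdk                       # target-side namespace
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                 # hub for all veth peer ends
  ip link set nvmf_br up
  # Tag the rule so cleanup can strip it by matching on SPDK_NVMF:
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
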
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:45.962 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target0 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local 
dev=target0 ns=nvmf_ns_spdk 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:45.962 10.0.0.1 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:45.962 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:45.963 10.0.0.2 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:45.963 
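
The 10.0.0.x addresses come from a 32-bit pool value (167772161 = 0x0a000001) that val_to_ip unpacks one octet at a time; set_ip then assigns the /24 and mirrors the address into the device's ifalias so later helpers can read it back. A sketch of the conversion (the exact helper in setup.sh may be written differently):

    # val_to_ip: unpack a 32-bit integer into dotted-quad form.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161   # 10.0.0.1 (initiator0, host side)
    val_to_ip 167772162   # 10.0.0.2 (target0, assigned inside nvmf_ns_spdk)
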
07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:45.963 07:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:45.963 07:23:10 
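
Each interface pair follows the same recipe. Condensed for pair 0 as it appears in the trace (link-up steps omitted for brevity):

    # setup_interface_pair 0: veth pairs, namespace move, bridging, firewall.
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br        # the *_br peers stay on the host
    ip link set target0_br    master nvmf_br        # and get enslaved to the bridge
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

Pair 1 is identical with initiator1/target1 and 10.0.0.3/10.0.0.4, which is exactly what the trace below walks through.
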
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target1 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772163 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:45.963 10.0.0.3 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.963 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:45.964 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772164 00:25:45.964 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:45.964 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:45.964 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:46.225 10.0.0.4 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator1_br 
bridge=nvmf_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:46.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:46.225 00:25:46.225 --- 10.0.0.1 ping statistics --- 00:25:46.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.225 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.225 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:46.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:46.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:25:46.226 00:25:46.226 --- 10.0.0.2 ping statistics --- 00:25:46.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.226 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:46.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:46.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:46.226 00:25:46.226 --- 10.0.0.3 ping statistics --- 00:25:46.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.226 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:46.226 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:46.226 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:46.226 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.147 ms 00:25:46.226 00:25:46.226 --- 10.0.0.4 ping statistics --- 00:25:46.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.226 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # return 0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.227 
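
The ping sweep above verifies connectivity in both directions: initiator addresses are pinged from inside the namespace, target addresses from the host, so every packet crosses the bridge. A condensed sketch of ping_ips under those assumptions:

    # ping_ips 2, condensed: addresses are recovered from ifalias, where
    # set_ip stashed them earlier.
    for pair in 0 1; do
        init_ip=$(cat /sys/class/net/initiator$pair/ifalias)
        tgt_ip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target$pair/ifalias)
        ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"   # namespace -> host
        ping -c 1 "$tgt_ip"                               # host -> namespace
    done
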
07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:46.227 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:46.228 ' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 
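
nvmf_legacy_env freezes the discovered topology into the variable names the older tests expect. Summarizing the values derived in the trace above:

    # Legacy environment as derived for this run.
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_TRANSPORT_OPTS='-t tcp -o'
    # NVMF_APP is prefixed with the namespace wrapper so the target
    # launches inside nvmf_ns_spdk:
    # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
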
00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=71967 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 71967 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 71967 ']' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:46.228 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:46.228 [2024-11-20 07:23:10.374028] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:46.228 [2024-11-20 07:23:10.374094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.496 [2024-11-20 07:23:10.512824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.496 [2024-11-20 07:23:10.548369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.496 [2024-11-20 07:23:10.548412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.496 [2024-11-20 07:23:10.548418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.496 [2024-11-20 07:23:10.548422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.496 [2024-11-20 07:23:10.548425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
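
nvmfappstart wraps the target launch: the binary runs inside the namespace with tracing enabled (-e 0xFFFF) and initialization paused (--wait-for-rpc), and waitforlisten blocks until the RPC socket answers. A rough sketch of that flow; the rpc.py path and the polling loop are illustrative, the real waitforlisten is more careful about retries and process death:

    # Launch nvmf_tgt inside the test namespace, paused until RPC-driven init.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll until the app listens on /var/tmp/spdk.sock (hypothetical loop).
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
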
00:25:46.496 [2024-11-20 07:23:10.548758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.062 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 [2024-11-20 07:23:11.297285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 Malloc0 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 [2024-11-20 07:23:11.339933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.321 [2024-11-20 07:23:11.363997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.321 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:47.579 [2024-11-20 07:23:11.544298] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:48.952 Initializing NVMe Controllers 00:25:48.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:48.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:48.952 Initialization complete. Launching workers. 
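
The test itself is a deliberate buffer squeeze: the iobuf small pool is capped at 154 buffers and the TCP transport is created with only 24 buffers (-n/-b), so the 4-deep 128 KiB randread workload is forced onto the buffer-retry path. The RPC sequence from the trace, condensed (rpc_cmd is the test suite's rpc.py wrapper):

    # wait_for_buf.sh, condensed: starve the small iobuf pool, then drive I/O.
    rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc_cmd framework_start_init
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # Pass criterion: the small pool must have been exhausted at least once.
    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && return 1   # 4788 retries in this run, so it passes
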
00:25:48.952 ======================================================== 00:25:48.952 Latency(us) 00:25:48.952 Device Information : IOPS MiB/s Average min max 00:25:48.952 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.10 6257.38 8999.93 00:25:48.952 ======================================================== 00:25:48.952 Total : 504.00 63.00 7992.10 6257.38 8999.93 00:25:48.952 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:48.952 rmmod nvme_tcp 00:25:48.952 rmmod nvme_fabrics 00:25:48.952 rmmod nvme_keyring 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:48.952 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 71967 ']' 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 71967 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 71967 ']' 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 71967 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71967 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:48.953 killing process with pid 71967 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71967' 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 71967 00:25:48.953 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 71967 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:48.953 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:49.211 07:23:13 
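
Teardown mirrors setup: host-side devices are deleted explicitly, the namespaced target devices go away with the namespace itself, and the firewall rules are filtered out by their comment tag. The iptr step visible further down in the trace amounts to:

    # iptr: drop every rule tagged SPDK_NVMF in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # target0/target1 need no explicit delete: remove_target_ns tears down
    # nvmf_ns_spdk and takes their veth ends with it, which is why the loop
    # below hits "continue" for both.
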
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:49.211 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:25:49.212 00:25:49.212 real 0m3.454s 00:25:49.212 user 0m3.085s 00:25:49.212 sys 0m0.618s 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:49.212 ************************************ 00:25:49.212 END TEST nvmf_wait_for_buf 00:25:49.212 ************************************ 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:49.212 ************************************ 00:25:49.212 START TEST nvmf_nsid 00:25:49.212 ************************************ 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:49.212 * Looking for test storage... 
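The wait_for_buf test that just ended passes only if the small-pool retry counter it read a few entries back is non-zero, i.e. allocations really did queue on an exhausted iobuf pool. The same query by hand, assuming a running target and the stock rpc.py client on the default socket:
# 4788 in this run; the jq filter is the one traced from wait_for_buf.sh
retry_count=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && echo 'FAIL: no buffer starvation observed' >&2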
00:25:49.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:25:49.212 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:49.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.471 --rc genhtml_branch_coverage=1 00:25:49.471 --rc genhtml_function_coverage=1 00:25:49.471 --rc genhtml_legend=1 00:25:49.471 --rc geninfo_all_blocks=1 00:25:49.471 --rc geninfo_unexecuted_blocks=1 00:25:49.471 00:25:49.471 ' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:49.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.471 --rc genhtml_branch_coverage=1 00:25:49.471 --rc genhtml_function_coverage=1 00:25:49.471 --rc genhtml_legend=1 00:25:49.471 --rc geninfo_all_blocks=1 00:25:49.471 --rc geninfo_unexecuted_blocks=1 00:25:49.471 00:25:49.471 ' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:49.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.471 --rc genhtml_branch_coverage=1 00:25:49.471 --rc genhtml_function_coverage=1 00:25:49.471 --rc genhtml_legend=1 00:25:49.471 --rc geninfo_all_blocks=1 00:25:49.471 --rc geninfo_unexecuted_blocks=1 00:25:49.471 00:25:49.471 ' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:49.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.471 --rc genhtml_branch_coverage=1 00:25:49.471 --rc genhtml_function_coverage=1 00:25:49.471 --rc genhtml_legend=1 00:25:49.471 --rc geninfo_all_blocks=1 00:25:49.471 --rc geninfo_unexecuted_blocks=1 00:25:49.471 00:25:49.471 ' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
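The lcov probe above runs through the scripts/common.sh comparator: both version strings are split on '.', '-' and ':' and the components compared numerically, left to right, with missing components treated as 0. A self-contained sketch of that logic:
lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov older than 2: keep the --rc branch/function coverage flags'
Here 1.15 < 2 holds because the first components already differ (1 < 2), which is exactly the ver1[v]=1 / ver2[v]=2 comparison in the trace.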
00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:49.471 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:49.472 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:49.472 07:23:13 
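The "[: : integer expression expected" message above is common.sh handing an empty string to a numeric test. The usual guard is to default the expansion before comparing; the flag name below is purely illustrative, not the variable actually tested at common.sh line 31:
# expand to 0 when the variable is empty or unset before a numeric comparison
if [[ ${SOME_TEST_FLAG:-0} -eq 1 ]]; then
    echo 'flag-specific nvmf_tgt arguments would be appended here'
fi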
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@223 -- # create_target_ns 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@121 -- # return 0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 
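At this point setup.sh has the scaffolding for the virtual topology in place: a dedicated network namespace, a bridge in the root namespace, and a FORWARD rule tagged with an SPDK_NVMF comment so that the teardown path seen earlier can strip it again with iptables-save | grep -v SPDK_NVMF | iptables-restore. Condensed from the trace:
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
# the comment is a removal handle for the teardown, not documentation
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'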
07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:49.472 07:23:13 
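val_to_ip above maps a 32-bit value from the ip_pool counter to dotted-quad form; 167772161 is 0x0A000001, hence 10.0.0.1. A sketch of the arithmetic implied by the printf in the trace:
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) $((val & 0xff))
}
val_to_ip 167772161   # 10.0.0.1, the initiator side of the pair
val_to_ip 167772162   # 10.0.0.2, the matching target side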
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:49.472 10.0.0.1 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:49.472 10.0.0.2 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:49.472 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # 
ips=("$ip" $((++ip))) 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target1 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ 
tcp == tcp ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772163 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:49.473 10.0.0.3 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772164 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:49.473 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:49.733 10.0.0.4 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:49.733 07:23:13 
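The loop is now assembling its second initiator/target pair. The whole per-pair recipe, condensed (ordering compressed relative to the trace): a veth pair per endpoint, the target end moved into the namespace, addresses handed out from the 10.0.0.0/24 pool, the *_br peers enslaved to nvmf_br, and an INPUT rule (again SPDK_NVMF-tagged) opening the NVMe-oF port:
ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk
ip addr add 10.0.0.3/24 dev initiator1
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
ip link set initiator1 up && ip link set initiator1_br up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br
ip link set target1_br master nvmf_br && ip link set target1_br up
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'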
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 
-p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.733 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:49.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:25:49.734 00:25:49.734 --- 10.0.0.1 ping statistics --- 00:25:49.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.734 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:49.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:25:49.734 00:25:49.734 --- 10.0.0.2 ping statistics --- 00:25:49.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.734 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:49.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:49.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:49.734 00:25:49.734 --- 10.0.0.3 ping statistics --- 00:25:49.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.734 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:49.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:49.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.124 ms 00:25:49.734 00:25:49.734 --- 10.0.0.4 ping statistics --- 00:25:49.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.734 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # return 0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.734 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:49.735 07:23:13 
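The legacy NVMF_* variables are filled in by reading each address back from the ifalias attribute that set_ip wrote, rather than by parsing ip addr output. A condensed sketch of the readback (the traced helper is get_ip_address with a namespace nameref; shortened here for illustration):
get_ip_address() {
    local dev=$1 ns=${2:-} cmd=()
    [[ -n $ns ]] && cmd=(ip netns exec "$ns")      # target devices live in the netns
    "${cmd[@]}" cat "/sys/class/net/$dev/ifalias"
}
NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)          # 10.0.0.1
NVMF_FIRST_TARGET_IP=$(get_ip_address target0 nvmf_ns_spdk)   # 10.0.0.2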
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:49.735 ' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=72239 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 72239 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72239 ']' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.735 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:49.735 [2024-11-20 07:23:13.885140] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:49.735 [2024-11-20 07:23:13.885253] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.993 [2024-11-20 07:23:14.030315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.993 [2024-11-20 07:23:14.073395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.993 [2024-11-20 07:23:14.073440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.993 [2024-11-20 07:23:14.073447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.993 [2024-11-20 07:23:14.073452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.993 [2024-11-20 07:23:14.073457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
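nvmfappstart, traced above, amounts to launching nvmf_tgt in the background and polling its RPC socket until it answers. In the sketch below the launch command and the max_retries=100 limit are taken from the log; the rpc_get_methods readiness probe is an assumption about what waitforlisten does between retries:

# Launch the target inside the test namespace and remember its pid (72239 here).
ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app is ready.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < 100 )); do
        kill -0 "$pid" || return 1                       # app died early
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
            rpc_get_methods &> /dev/null && return 0     # assumed readiness probe
        sleep 0.5
    done
    return 1
}
waitforlisten "$nvmfpid"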
00:25:49.993 [2024-11-20 07:23:14.073710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.993 [2024-11-20 07:23:14.107035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=72271 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:50.924 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=06f07809-f184-4cda-b544-2f2edcfb7684 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # 
uuidgen 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=962a05f4-66ec-4ae1-9baa-291cfc975b2e 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fcc6a92d-3a90-4abd-bb4e-f39490a7342d 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:50.925 null0 00:25:50.925 null1 00:25:50.925 [2024-11-20 07:23:14.836773] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:50.925 [2024-11-20 07:23:14.836831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72271 ] 00:25:50.925 null2 00:25:50.925 [2024-11-20 07:23:14.848121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.925 [2024-11-20 07:23:14.872206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 72271 /var/tmp/tgt2.sock 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72271 ']' 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
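The rpc_cmd batch at nsid.sh@63 is collapsed in the trace; only its effects are visible: the three null bdevs echoed above and the 10.0.0.2:4420 listener. One plausible shape of such a provisioning batch, using standard SPDK RPCs (the subsystem NQN, host policy, and bdev geometry are assumptions; the bdev names, UUIDs, address, and port are from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode1 -a        # NQN assumed
for i in 0 1 2; do $rpc bdev_null_create "null$i" 64 512; done  # geometry assumed
$rpc nvmf_subsystem_add_ns -u 06f07809-f184-4cda-b544-2f2edcfb7684 nqn.2024-10.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_ns -u 962a05f4-66ec-4ae1-9baa-291cfc975b2e nqn.2024-10.io.spdk:cnode1 null1
$rpc nvmf_subsystem_add_ns -u fcc6a92d-3a90-4abd-bb4e-f39490a7342d nqn.2024-10.io.spdk:cnode1 null2
$rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The second target is driven the same way at nsid.sh@80 via rpc.py -s /var/tmp/tgt2.sock, which is where the 10.0.0.1:4421 listener targeted by the nvme connect below comes from.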
00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.925 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:50.925 [2024-11-20 07:23:14.974548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.925 [2024-11-20 07:23:15.013274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.925 [2024-11-20 07:23:15.061533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:51.183 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.183 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:51.183 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:51.441 [2024-11-20 07:23:15.524594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.441 [2024-11-20 07:23:15.540718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:51.441 nvme0n1 nvme0n2 00:25:51.441 nvme1n1 00:25:51.441 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:51.441 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:51.441 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:51.699 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:52.633 07:23:16 
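After nvme connect reaches cnode2, the helper scans /sys/class/nvme/nvme*/subsysnqn to find which controller it was handed, then waitforblk polls for the namespace node; its first iteration is traced above (i=0, retry limit 15, sleep 1). A minimal waitforblk consistent with the common.sh lines shown:

waitforblk() {
    local i=0
    while ! lsblk -l -o NAME | grep -q -w "$1"; do
        [ "$i" -lt 15 ] || return 1   # give up after ~15s
        ((++i))
        sleep 1
    done
    # Final confirmation read (common.sh@1246) before returning 0.
    lsblk -l -o NAME | grep -q -w "$1"
}
waitforblk nvme0n1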
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 06f07809-f184-4cda-b544-2f2edcfb7684 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=06f07809f1844cdab5442f2edcfb7684 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 06F07809F1844CDAB5442F2EDCFB7684 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 06F07809F1844CDAB5442F2EDCFB7684 == \0\6\F\0\7\8\0\9\F\1\8\4\4\C\D\A\B\5\4\4\2\F\2\E\D\C\F\B\7\6\8\4 ]] 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 962a05f4-66ec-4ae1-9baa-291cfc975b2e 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:52.633 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=962a05f466ec4ae19baa291cfc975b2e 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 962A05F466EC4AE19BAA291CFC975B2E 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 962A05F466EC4AE19BAA291CFC975B2E == \9\6\2\A\0\5\F\4\6\6\E\C\4\A\E\1\9\B\A\A\2\9\1\C\F\C\9\7\5\B\2\E ]] 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:52.890 07:23:16 
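The NGUID check above is a pure string transformation: SPDK derives a namespace's NGUID from its UUID by dropping the dashes, so the test can predict the value from what uuidgen produced. A sketch consistent with the trace (the tr, jq, and nvme-cli calls are verbatim; the upper-casing step is inferred from the echoed values):

uuid2nguid() { tr -d - <<< "${1^^}"; }       # dashes out, upper-cased (common.sh@544)

nvme_get_nguid() {
    local nguid
    nguid=$(nvme id-ns "/dev/$1n$2" -o json | jq -r .nguid)
    echo "${nguid^^}"                        # nsid.sh@43 echoes the upper-cased form
}

[[ "$(nvme_get_nguid nvme0 1)" == "$(uuid2nguid 06f07809-f184-4cda-b544-2f2edcfb7684)" ]]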
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fcc6a92d-3a90-4abd-bb4e-f39490a7342d 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:52.890 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fcc6a92d3a904abdbb4ef39490a7342d 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FCC6A92D3A904ABDBB4EF39490A7342D 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FCC6A92D3A904ABDBB4EF39490A7342D == \F\C\C\6\A\9\2\D\3\A\9\0\4\A\B\D\B\B\4\E\F\3\9\4\9\0\A\7\3\4\2\D ]] 00:25:52.891 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 72271 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72271 ']' 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72271 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72271 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.891 killing process with pid 72271 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72271' 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72271 00:25:52.891 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72271 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # 
'[' tcp == tcp ']' 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:53.148 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:53.148 rmmod nvme_tcp 00:25:53.148 rmmod nvme_fabrics 00:25:53.406 rmmod nvme_keyring 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 72239 ']' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72239 ']' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72239' 00:25:53.406 killing process with pid 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72239 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link 
delete nvmf_br' 00:25:53.406 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:25:53.665 00:25:53.665 real 0m4.387s 00:25:53.665 user 0m6.370s 00:25:53.665 sys 0m1.339s 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set 
+x 00:25:53.665 ************************************ 00:25:53.665 END TEST nvmf_nsid 00:25:53.665 ************************************ 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:53.665 ************************************ 00:25:53.665 END TEST nvmf_target_extra 00:25:53.665 ************************************ 00:25:53.665 00:25:53.665 real 4m18.799s 00:25:53.665 user 8m53.733s 00:25:53.665 sys 0m49.690s 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.665 07:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:53.665 07:23:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:53.665 07:23:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.665 07:23:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.665 07:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.665 ************************************ 00:25:53.665 START TEST nvmf_host 00:25:53.665 ************************************ 00:25:53.665 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:53.665 * Looking for test storage... 00:25:53.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:53.665 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.665 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.665 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.925 --rc genhtml_branch_coverage=1 00:25:53.925 --rc genhtml_function_coverage=1 00:25:53.925 --rc genhtml_legend=1 00:25:53.925 --rc geninfo_all_blocks=1 00:25:53.925 --rc geninfo_unexecuted_blocks=1 00:25:53.925 00:25:53.925 ' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.925 --rc genhtml_branch_coverage=1 00:25:53.925 --rc genhtml_function_coverage=1 00:25:53.925 --rc genhtml_legend=1 00:25:53.925 --rc geninfo_all_blocks=1 00:25:53.925 --rc geninfo_unexecuted_blocks=1 00:25:53.925 00:25:53.925 ' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.925 --rc genhtml_branch_coverage=1 00:25:53.925 --rc genhtml_function_coverage=1 00:25:53.925 --rc genhtml_legend=1 00:25:53.925 --rc geninfo_all_blocks=1 00:25:53.925 --rc geninfo_unexecuted_blocks=1 00:25:53.925 00:25:53.925 ' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.925 --rc genhtml_branch_coverage=1 00:25:53.925 --rc genhtml_function_coverage=1 00:25:53.925 --rc genhtml_legend=1 00:25:53.925 --rc geninfo_all_blocks=1 00:25:53.925 --rc geninfo_unexecuted_blocks=1 00:25:53.925 00:25:53.925 ' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.925 07:23:17 
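The lt 1.15 2 walk traced above is scripts/common.sh's dotted-version comparison: split both versions on ., -, and :, then compare component-wise. A condensed sketch of the same idea (simplified; the real cmp_versions goes through a case "$op" dispatch and handles more edge cases):

lt() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++)); do
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1   # equal prefixes: not strictly less (simplified)
}
lt 1.15 2 && echo "lcov predates 2.x"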
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.925 07:23:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:53.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.926 ************************************ 00:25:53.926 START TEST nvmf_identify 00:25:53.926 ************************************ 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:53.926 * Looking for test storage... 
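The "[: : integer expression expected" complaint above is common.sh line 31 evaluating '[' '' -eq 1 ']' with an unset variable: -eq requires both operands to be integers, so an empty string is a runtime error (exit status 2) rather than a clean false. It is harmless here; the script simply falls through to the next branch. A minimal reproduction and the usual guard (FLAG is a stand-in; the actual variable name is not shown in the log):

unset FLAG
[ "$FLAG" -eq 1 ]        # -> [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ]   # defaulting to 0 keeps the test well-formed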
00:25:53.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.926 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.926 --rc genhtml_branch_coverage=1 00:25:53.926 --rc genhtml_function_coverage=1 00:25:53.926 --rc genhtml_legend=1 00:25:53.926 --rc geninfo_all_blocks=1 00:25:53.926 --rc geninfo_unexecuted_blocks=1 00:25:53.926 00:25:53.926 ' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.926 --rc genhtml_branch_coverage=1 00:25:53.926 --rc genhtml_function_coverage=1 00:25:53.926 --rc genhtml_legend=1 00:25:53.926 --rc geninfo_all_blocks=1 00:25:53.926 --rc geninfo_unexecuted_blocks=1 00:25:53.926 00:25:53.926 ' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.926 --rc genhtml_branch_coverage=1 00:25:53.926 --rc genhtml_function_coverage=1 00:25:53.926 --rc genhtml_legend=1 00:25:53.926 --rc geninfo_all_blocks=1 00:25:53.926 --rc geninfo_unexecuted_blocks=1 00:25:53.926 00:25:53.926 ' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.926 --rc genhtml_branch_coverage=1 00:25:53.926 --rc genhtml_function_coverage=1 00:25:53.926 --rc genhtml_legend=1 00:25:53.926 --rc geninfo_all_blocks=1 00:25:53.926 --rc geninfo_unexecuted_blocks=1 00:25:53.926 00:25:53.926 ' 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:53.926 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:53.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@223 -- # create_target_ns 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:53.927 07:23:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:53.927 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target0 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:53.928 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:54.188 10.0.0.1 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target0 
ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:54.188 10.0.0.2 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:54.188 07:23:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:54.188 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:54.189 07:23:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772163 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:54.189 10.0.0.3 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772164 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:54.189 10.0.0.4 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:54.189 07:23:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.189 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.190 
07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:54.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
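The trace to this point is nvmf_veth_init (via setup_interfaces) building the test topology: two initiator/target interface pairs, with each target end moved into the nvmf_ns_spdk namespace, the *_br veth peers enslaved to the nvmf_br bridge, and iptables ACCEPT rules for bridge-internal forwarding and for TCP port 4420. A condensed sketch of one pair, using only commands that appear in the trace (device and namespace names copied from it):

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # initiator side stays in the root namespace
  ip link add initiator0 type veth peer name initiator0_br
  ip addr add 10.0.0.1/24 dev initiator0
  ip link set initiator0 up
  # target side is pushed into the namespace before it gets its address
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip netns exec nvmf_ns_spdk ip link set target0 up
  # the *_br peers hang off the bridge so both sides share one L2 segment
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br
  ip link set initiator0_br up
  ip link set target0_br up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

The second pair (initiator1/target1, 10.0.0.3 and 10.0.0.4) is wired identically.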
00:25:54.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:25:54.190 00:25:54.190 --- 10.0.0.1 ping statistics --- 00:25:54.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.190 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:54.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
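The addresses themselves come from an integer pool (ip_pool=0x0a000001, i.e. 167772161); val_to_ip expands each value into dotted-quad form via the printf calls visible above. Only the final printf appears in the trace, so the octet-splitting arithmetic below is an assumption, but the helper is presumably equivalent to:

  # hypothetical reconstruction of nvmf/setup.sh's val_to_ip
  val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $(( (val >> 24) & 0xff )) \
      $(( (val >> 16) & 0xff )) \
      $(( (val >> 8)  & 0xff )) \
      $((  val        & 0xff ))
  }

  val_to_ip 167772161   # 10.0.0.1
  val_to_ip 167772162   # 10.0.0.2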
00:25:54.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:25:54.190 00:25:54.190 --- 10.0.0.2 ping statistics --- 00:25:54.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.190 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:54.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
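Note how addresses are read back: set_ip mirrored each address into /sys/class/net/<dev>/ifalias with tee, so get_ip_address never parses `ip addr` output; it just cats the alias file, inside the namespace when needed. A rough sketch of that lookup:

  # ifalias-based lookup, as performed in the trace
  get_ip_address() {
    local dev=$1 netns=$2   # netns is optional
    if [[ -n $netns ]]; then
      ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
    else
      cat "/sys/class/net/$dev/ifalias"
    fi
  }

  get_ip_address initiator0            # 10.0.0.1
  get_ip_address target0 nvmf_ns_spdk  # 10.0.0.2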
00:25:54.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:25:54.190 00:25:54.190 --- 10.0.0.3 ping statistics --- 00:25:54.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.190 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:54.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
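The ping_ips loop then proves connectivity in both directions across the bridge: initiator addresses are pinged from inside the namespace, target addresses from the root namespace. Using get_ip_address from the sketch above:

  # sketch of ping_ips for the two configured pairs
  for pair in 0 1; do
    ip netns exec nvmf_ns_spdk ping -c 1 "$(get_ip_address initiator$pair)"
    ping -c 1 "$(get_ip_address target$pair nvmf_ns_spdk)"
  done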
00:25:54.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:25:54.190 00:25:54.190 --- 10.0.0.4 ping statistics --- 00:25:54.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.190 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # return 0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:54.190 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:54.191 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.449 07:23:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:54.449 ' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=72622 00:25:54.449 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 72622 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 72622 ']' 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
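With networking verified, NVMF_APP is prefixed with the namespace wrapper and the target is started there; waitforlisten then blocks until the RPC socket exists. A minimal stand-in for that sequence (the launch command is verbatim from the trace; the polling loop is illustrative, not the autotest implementation):

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll for the RPC unix socket, bailing out if the target dies first
  until [[ -S /var/tmp/spdk.sock ]]; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    sleep 0.1
  done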
00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.450 07:23:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:54.450 [2024-11-20 07:23:18.476496] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:54.450 [2024-11-20 07:23:18.476549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.450 [2024-11-20 07:23:18.610630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.707 [2024-11-20 07:23:18.650052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.707 [2024-11-20 07:23:18.650089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.707 [2024-11-20 07:23:18.650097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.707 [2024-11-20 07:23:18.650104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.707 [2024-11-20 07:23:18.650110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.707 [2024-11-20 07:23:18.650778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.707 [2024-11-20 07:23:18.650825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.707 [2024-11-20 07:23:18.650886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.707 [2024-11-20 07:23:18.650891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.707 [2024-11-20 07:23:18.685408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:55.271 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.271 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:55.271 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.271 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.271 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.271 [2024-11-20 07:23:19.363628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:25:55.272 Malloc0 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.272 [2024-11-20 07:23:19.451372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.272 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.533 [ 00:25:55.533 { 00:25:55.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:55.533 "subtype": "Discovery", 00:25:55.533 "listen_addresses": [ 00:25:55.533 { 00:25:55.533 "trtype": "TCP", 00:25:55.533 "adrfam": "IPv4", 00:25:55.533 "traddr": "10.0.0.2", 00:25:55.533 "trsvcid": "4420" 00:25:55.533 } 00:25:55.533 ], 00:25:55.533 "allow_any_host": true, 00:25:55.533 "hosts": [] 00:25:55.533 }, 00:25:55.533 { 00:25:55.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.533 "subtype": "NVMe", 00:25:55.533 "listen_addresses": [ 00:25:55.533 { 00:25:55.533 "trtype": "TCP", 00:25:55.533 "adrfam": "IPv4", 00:25:55.533 "traddr": "10.0.0.2", 00:25:55.533 "trsvcid": "4420" 00:25:55.533 } 00:25:55.533 ], 00:25:55.533 "allow_any_host": true, 00:25:55.533 "hosts": [], 00:25:55.533 "serial_number": "SPDK00000000000001", 00:25:55.533 "model_number": "SPDK bdev Controller", 00:25:55.533 "max_namespaces": 32, 00:25:55.533 "min_cntlid": 1, 00:25:55.533 "max_cntlid": 65519, 00:25:55.533 "namespaces": [ 
00:25:55.533 { 00:25:55.533 "nsid": 1, 00:25:55.533 "bdev_name": "Malloc0", 00:25:55.533 "name": "Malloc0", 00:25:55.533 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:55.533 "eui64": "ABCDEF0123456789", 00:25:55.533 "uuid": "71eee607-ef40-46e5-9605-68a03e4a198b" 00:25:55.533 } 00:25:55.533 ] 00:25:55.533 } 00:25:55.533 ] 00:25:55.533 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.534 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:55.534 [2024-11-20 07:23:19.497466] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:55.534 [2024-11-20 07:23:19.497660] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:25:55.534 [2024-11-20 07:23:19.638166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:55.534 [2024-11-20 07:23:19.638212] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:55.534 [2024-11-20 07:23:19.638215] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:55.534 [2024-11-20 07:23:19.642229] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:55.534 [2024-11-20 07:23:19.642243] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:55.534 [2024-11-20 07:23:19.642454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:55.534 [2024-11-20 07:23:19.642493] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f32750 0 00:25:55.534 [2024-11-20 07:23:19.650232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:55.534 [2024-11-20 07:23:19.650248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:55.534 [2024-11-20 07:23:19.650251] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:55.534 [2024-11-20 07:23:19.650253] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:55.534 [2024-11-20 07:23:19.650273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.650277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.650279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.534 [2024-11-20 07:23:19.650291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:55.534 [2024-11-20 07:23:19.650310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.534 [2024-11-20 07:23:19.658234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.534 [2024-11-20 07:23:19.658248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.534 [2024-11-20 07:23:19.658251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.534 
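The rpc_cmd calls above (a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) are what produced the subsystem listing just dumped: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. Replayed directly against rpc.py, with every argument taken from the trace:

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems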
[2024-11-20 07:23:19.658254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.534 [2024-11-20 07:23:19.658261] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:55.534 [2024-11-20 07:23:19.658267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:55.534 [2024-11-20 07:23:19.658270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:55.534 [2024-11-20 07:23:19.658280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.534 [2024-11-20 07:23:19.658290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.534 [2024-11-20 07:23:19.658304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.534 [2024-11-20 07:23:19.658350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.534 [2024-11-20 07:23:19.658354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.534 [2024-11-20 07:23:19.658356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.534 [2024-11-20 07:23:19.658362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:55.534 [2024-11-20 07:23:19.658366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:55.534 [2024-11-20 07:23:19.658370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.534 [2024-11-20 07:23:19.658378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.534 [2024-11-20 07:23:19.658386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.534 [2024-11-20 07:23:19.658422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.534 [2024-11-20 07:23:19.658425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.534 [2024-11-20 07:23:19.658427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.534 [2024-11-20 07:23:19.658433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:55.534 [2024-11-20 07:23:19.658437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 
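For reference, the target-side objects exercised by this trace can be recreated by hand against a running nvmf_tgt with SPDK's rpc.py, the same RPC interface the harness's rpc_cmd wrapper drives. A minimal sketch, assuming the SPDK repo root as working directory, a TCP transport already created, and placeholder sizing for the Malloc0 bdev (it was created before this excerpt, so its real parameters are assumptions):

  # assumed prior step: scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # 64 MiB / 512 B blocks: assumed values
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # expose both the subsystem and the discovery service on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems    # should print the two-subsystem JSON shown above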
00:25:55.534 [2024-11-20 07:23:19.658441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.534 [2024-11-20 07:23:19.658449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.534 [2024-11-20 07:23:19.658457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.534 [2024-11-20 07:23:19.658492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.534 [2024-11-20 07:23:19.658496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.534 [2024-11-20 07:23:19.658498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.534 [2024-11-20 07:23:19.658503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:55.534 [2024-11-20 07:23:19.658508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.534 [2024-11-20 07:23:19.658516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.534 [2024-11-20 07:23:19.658524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.534 [2024-11-20 07:23:19.658563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.534 [2024-11-20 07:23:19.658567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.534 [2024-11-20 07:23:19.658568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.534 [2024-11-20 07:23:19.658573] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:55.534 [2024-11-20 07:23:19.658576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:55.534 [2024-11-20 07:23:19.658581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:55.534 [2024-11-20 07:23:19.658684] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:55.534 [2024-11-20 07:23:19.658691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:55.534 [2024-11-20 07:23:19.658697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.534 [2024-11-20 07:23:19.658699] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.658706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.535 [2024-11-20 07:23:19.658714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.535 [2024-11-20 07:23:19.658753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.535 [2024-11-20 07:23:19.658764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.535 [2024-11-20 07:23:19.658766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.535 [2024-11-20 07:23:19.658771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:55.535 [2024-11-20 07:23:19.658777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.658786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.535 [2024-11-20 07:23:19.658794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.535 [2024-11-20 07:23:19.658827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.535 [2024-11-20 07:23:19.658831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.535 [2024-11-20 07:23:19.658832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.535 [2024-11-20 07:23:19.658837] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:55.535 [2024-11-20 07:23:19.658840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.658845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:55.535 [2024-11-20 07:23:19.658851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.658857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.658863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.535 [2024-11-20 07:23:19.658872] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.535 [2024-11-20 07:23:19.658931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.535 [2024-11-20 07:23:19.658935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.535 [2024-11-20 07:23:19.658937] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658940] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f32750): datao=0, datal=4096, cccid=0 00:25:55.535 [2024-11-20 07:23:19.658943] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f96740) on tqpair(0x1f32750): expected_datao=0, payload_size=4096 00:25:55.535 [2024-11-20 07:23:19.658946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658951] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658954] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.535 [2024-11-20 07:23:19.658963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.535 [2024-11-20 07:23:19.658965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.535 [2024-11-20 07:23:19.658972] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:55.535 [2024-11-20 07:23:19.658975] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:55.535 [2024-11-20 07:23:19.658977] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:55.535 [2024-11-20 07:23:19.658980] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:55.535 [2024-11-20 07:23:19.658983] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:55.535 [2024-11-20 07:23:19.658985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.658992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.658996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.658998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:55.535 [2024-11-20 07:23:19.659013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.535 [2024-11-20 07:23:19.659055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.535 [2024-11-20 07:23:19.659059] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.535 [2024-11-20 07:23:19.659061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750 00:25:55.535 [2024-11-20 07:23:19.659069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.535 [2024-11-20 07:23:19.659081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.535 [2024-11-20 07:23:19.659092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.535 [2024-11-20 07:23:19.659104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.535 [2024-11-20 07:23:19.659114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.659120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:55.535 [2024-11-20 07:23:19.659124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.535 [2024-11-20 07:23:19.659140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96740, cid 0, qid 0 00:25:55.535 [2024-11-20 07:23:19.659143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f968c0, cid 1, qid 0 00:25:55.535 [2024-11-20 07:23:19.659146] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96a40, cid 2, qid 0 00:25:55.535 [2024-11-20 07:23:19.659149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.535 [2024-11-20 07:23:19.659152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96d40, cid 4, qid 0 00:25:55.535 [2024-11-20 07:23:19.659242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.535 [2024-11-20 07:23:19.659246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.535 [2024-11-20 07:23:19.659249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96d40) on tqpair=0x1f32750 00:25:55.535 [2024-11-20 07:23:19.659254] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:55.535 [2024-11-20 07:23:19.659257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:55.535 [2024-11-20 07:23:19.659263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.535 [2024-11-20 07:23:19.659266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f32750) 00:25:55.535 [2024-11-20 07:23:19.659270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.535 [2024-11-20 07:23:19.659279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96d40, cid 4, qid 0 00:25:55.535 [2024-11-20 07:23:19.659325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.535 [2024-11-20 07:23:19.659329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.536 [2024-11-20 07:23:19.659331] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659333] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f32750): datao=0, datal=4096, cccid=4 00:25:55.536 [2024-11-20 07:23:19.659335] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f96d40) on tqpair(0x1f32750): expected_datao=0, payload_size=4096 00:25:55.536 [2024-11-20 07:23:19.659338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659342] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659344] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.536 [2024-11-20 07:23:19.659353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.536 [2024-11-20 07:23:19.659355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96d40) on tqpair=0x1f32750 00:25:55.536 [2024-11-20 07:23:19.659365] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:55.536 [2024-11-20 07:23:19.659383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1f32750) 00:25:55.536 [2024-11-20 07:23:19.659390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.536 [2024-11-20 07:23:19.659394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f32750) 00:25:55.536 [2024-11-20 07:23:19.659402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.536 [2024-11-20 07:23:19.659414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96d40, cid 4, qid 0 00:25:55.536 [2024-11-20 07:23:19.659418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96ec0, cid 5, qid 0 00:25:55.536 [2024-11-20 07:23:19.659497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.536 [2024-11-20 07:23:19.659500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.536 [2024-11-20 07:23:19.659502] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659504] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f32750): datao=0, datal=1024, cccid=4 00:25:55.536 [2024-11-20 07:23:19.659507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f96d40) on tqpair(0x1f32750): expected_datao=0, payload_size=1024 00:25:55.536 [2024-11-20 07:23:19.659509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659513] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659515] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.536 [2024-11-20 07:23:19.659522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.536 [2024-11-20 07:23:19.659524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96ec0) on tqpair=0x1f32750 00:25:55.536 [2024-11-20 07:23:19.659536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.536 [2024-11-20 07:23:19.659540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.536 [2024-11-20 07:23:19.659542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96d40) on tqpair=0x1f32750 00:25:55.536 [2024-11-20 07:23:19.659551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f32750) 00:25:55.536 [2024-11-20 07:23:19.659557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.536 [2024-11-20 07:23:19.659568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96d40, cid 4, qid 0 00:25:55.536 [2024-11-20 07:23:19.659615] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.536 [2024-11-20 07:23:19.659619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.536 [2024-11-20 07:23:19.659621] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659622] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f32750): datao=0, datal=3072, cccid=4 00:25:55.536 [2024-11-20 07:23:19.659625] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f96d40) on tqpair(0x1f32750): expected_datao=0, payload_size=3072 00:25:55.536 [2024-11-20 07:23:19.659627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659631] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659634] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.536 [2024-11-20 07:23:19.659642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.536 [2024-11-20 07:23:19.659644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96d40) on tqpair=0x1f32750 00:25:55.536 [2024-11-20 07:23:19.659651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f32750) 00:25:55.536 [2024-11-20 07:23:19.659657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.536 [2024-11-20 07:23:19.659667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96d40, cid 4, qid 0 00:25:55.536 [2024-11-20 07:23:19.659716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.536 [2024-11-20 07:23:19.659720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.536 [2024-11-20 07:23:19.659722] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f32750): datao=0, datal=8, cccid=4 00:25:55.536 [2024-11-20 07:23:19.659726] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f96d40) on tqpair(0x1f32750): expected_datao=0, payload_size=8 00:25:55.536 [2024-11-20 07:23:19.659728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659733] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659734] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.536 [2024-11-20 07:23:19.659746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.536 [2024-11-20 07:23:19.659748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.536 [2024-11-20 07:23:19.659750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96d40) on tqpair=0x1f32750 00:25:55.536 ===================================================== 00:25:55.536 NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2014-08.org.nvmexpress.discovery
00:25:55.536 =====================================================
00:25:55.536 Controller Capabilities/Features
00:25:55.536 ================================
00:25:55.536 Vendor ID: 0000
00:25:55.536 Subsystem Vendor ID: 0000
00:25:55.536 Serial Number: ....................
00:25:55.536 Model Number: ........................................
00:25:55.536 Firmware Version: 25.01
00:25:55.536 Recommended Arb Burst: 0
00:25:55.536 IEEE OUI Identifier: 00 00 00
00:25:55.536 Multi-path I/O
00:25:55.536 May have multiple subsystem ports: No
00:25:55.536 May have multiple controllers: No
00:25:55.536 Associated with SR-IOV VF: No
00:25:55.536 Max Data Transfer Size: 131072
00:25:55.536 Max Number of Namespaces: 0
00:25:55.536 Max Number of I/O Queues: 1024
00:25:55.536 NVMe Specification Version (VS): 1.3
00:25:55.536 NVMe Specification Version (Identify): 1.3
00:25:55.536 Maximum Queue Entries: 128
00:25:55.536 Contiguous Queues Required: Yes
00:25:55.536 Arbitration Mechanisms Supported
00:25:55.536 Weighted Round Robin: Not Supported
00:25:55.536 Vendor Specific: Not Supported
00:25:55.536 Reset Timeout: 15000 ms
00:25:55.536 Doorbell Stride: 4 bytes
00:25:55.536 NVM Subsystem Reset: Not Supported
00:25:55.536 Command Sets Supported
00:25:55.536 NVM Command Set: Supported
00:25:55.536 Boot Partition: Not Supported
00:25:55.536 Memory Page Size Minimum: 4096 bytes
00:25:55.536 Memory Page Size Maximum: 4096 bytes
00:25:55.536 Persistent Memory Region: Not Supported
00:25:55.536 Optional Asynchronous Events Supported
00:25:55.536 Namespace Attribute Notices: Not Supported
00:25:55.536 Firmware Activation Notices: Not Supported
00:25:55.536 ANA Change Notices: Not Supported
00:25:55.536 PLE Aggregate Log Change Notices: Not Supported
00:25:55.536 LBA Status Info Alert Notices: Not Supported
00:25:55.536 EGE Aggregate Log Change Notices: Not Supported
00:25:55.536 Normal NVM Subsystem Shutdown event: Not Supported
00:25:55.536 Zone Descriptor Change Notices: Not Supported
00:25:55.536 Discovery Log Change Notices: Supported
00:25:55.536 Controller Attributes
00:25:55.536 128-bit Host Identifier: Not Supported
00:25:55.536 Non-Operational Permissive Mode: Not Supported
00:25:55.536 NVM Sets: Not Supported
00:25:55.536 Read Recovery Levels: Not Supported
00:25:55.536 Endurance Groups: Not Supported
00:25:55.536 Predictable Latency Mode: Not Supported
00:25:55.536 Traffic Based Keep ALive: Not Supported
00:25:55.536 Namespace Granularity: Not Supported
00:25:55.536 SQ Associations: Not Supported
00:25:55.536 UUID List: Not Supported
00:25:55.536 Multi-Domain Subsystem: Not Supported
00:25:55.536 Fixed Capacity Management: Not Supported
00:25:55.536 Variable Capacity Management: Not Supported
00:25:55.536 Delete Endurance Group: Not Supported
00:25:55.536 Delete NVM Set: Not Supported
00:25:55.536 Extended LBA Formats Supported: Not Supported
00:25:55.536 Flexible Data Placement Supported: Not Supported
00:25:55.536
00:25:55.537 Controller Memory Buffer Support
00:25:55.537 ================================
00:25:55.537 Supported: No
00:25:55.537
00:25:55.537 Persistent Memory Region Support
00:25:55.537 ================================
00:25:55.537 Supported: No
00:25:55.537
00:25:55.537 Admin Command Set Attributes
00:25:55.537 ============================
00:25:55.537 Security Send/Receive: Not Supported
00:25:55.537 Format NVM: Not Supported
00:25:55.537 Firmware Activate/Download: Not Supported
00:25:55.537 Namespace Management: Not Supported
00:25:55.537 Device Self-Test: Not Supported
00:25:55.537 Directives: Not Supported
00:25:55.537 NVMe-MI: Not Supported
00:25:55.537 Virtualization Management: Not Supported
00:25:55.537 Doorbell Buffer Config: Not Supported
00:25:55.537 Get LBA Status Capability: Not Supported
00:25:55.537 Command & Feature Lockdown Capability: Not Supported
00:25:55.537 Abort Command Limit: 1
00:25:55.537 Async Event Request Limit: 4
00:25:55.537 Number of Firmware Slots: N/A
00:25:55.537 Firmware Slot 1 Read-Only: N/A
00:25:55.537 Firmware Activation Without Reset: N/A
00:25:55.537 Multiple Update Detection Support: N/A
00:25:55.537 Firmware Update Granularity: No Information Provided
00:25:55.537 Per-Namespace SMART Log: No
00:25:55.537 Asymmetric Namespace Access Log Page: Not Supported
00:25:55.537 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:55.537 Command Effects Log Page: Not Supported
00:25:55.537 Get Log Page Extended Data: Supported
00:25:55.537 Telemetry Log Pages: Not Supported
00:25:55.537 Persistent Event Log Pages: Not Supported
00:25:55.537 Supported Log Pages Log Page: May Support
00:25:55.537 Commands Supported & Effects Log Page: Not Supported
00:25:55.537 Feature Identifiers & Effects Log Page:May Support
00:25:55.537 NVMe-MI Commands & Effects Log Page: May Support
00:25:55.537 Data Area 4 for Telemetry Log: Not Supported
00:25:55.537 Error Log Page Entries Supported: 128
00:25:55.537 Keep Alive: Not Supported
00:25:55.537
00:25:55.537 NVM Command Set Attributes
00:25:55.537 ==========================
00:25:55.537 Submission Queue Entry Size
00:25:55.537 Max: 1
00:25:55.537 Min: 1
00:25:55.537 Completion Queue Entry Size
00:25:55.537 Max: 1
00:25:55.537 Min: 1
00:25:55.537 Number of Namespaces: 0
00:25:55.537 Compare Command: Not Supported
00:25:55.537 Write Uncorrectable Command: Not Supported
00:25:55.537 Dataset Management Command: Not Supported
00:25:55.537 Write Zeroes Command: Not Supported
00:25:55.537 Set Features Save Field: Not Supported
00:25:55.537 Reservations: Not Supported
00:25:55.537 Timestamp: Not Supported
00:25:55.537 Copy: Not Supported
00:25:55.537 Volatile Write Cache: Not Present
00:25:55.537 Atomic Write Unit (Normal): 1
00:25:55.537 Atomic Write Unit (PFail): 1
00:25:55.537 Atomic Compare & Write Unit: 1
00:25:55.537 Fused Compare & Write: Supported
00:25:55.537 Scatter-Gather List
00:25:55.537 SGL Command Set: Supported
00:25:55.537 SGL Keyed: Supported
00:25:55.537 SGL Bit Bucket Descriptor: Not Supported
00:25:55.537 SGL Metadata Pointer: Not Supported
00:25:55.537 Oversized SGL: Not Supported
00:25:55.537 SGL Metadata Address: Not Supported
00:25:55.537 SGL Offset: Supported
00:25:55.537 Transport SGL Data Block: Not Supported
00:25:55.537 Replay Protected Memory Block: Not Supported
00:25:55.537
00:25:55.537 Firmware Slot Information
00:25:55.537 =========================
00:25:55.537 Active slot: 0
00:25:55.537
00:25:55.537
00:25:55.537 Error Log
00:25:55.537 =========
00:25:55.537
00:25:55.537 Active Namespaces
00:25:55.537 =================
00:25:55.537 Discovery Log Page
00:25:55.537 ==================
00:25:55.537 Generation Counter: 2
00:25:55.537 Number of Records: 2
00:25:55.537 Record Format: 0
00:25:55.537
00:25:55.537 Discovery Log Entry 0
00:25:55.537 ----------------------
00:25:55.537 Transport Type: 3 (TCP)
00:25:55.537 Address Family: 1 (IPv4)
00:25:55.537 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:55.537 Entry Flags:
00:25:55.537 Duplicate Returned Information: 1
00:25:55.537 Explicit Persistent Connection Support for Discovery: 1
00:25:55.537 Transport Requirements:
00:25:55.537 Secure Channel: Not Required
00:25:55.537 Port ID: 0 (0x0000)
00:25:55.537 Controller ID: 65535 (0xffff)
00:25:55.537 Admin Max SQ Size: 128
00:25:55.537 Transport Service Identifier: 4420
00:25:55.537 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:55.537 Transport Address: 10.0.0.2
00:25:55.537 Discovery Log Entry 1
00:25:55.537 ----------------------
00:25:55.537 Transport Type: 3 (TCP)
00:25:55.537 Address Family: 1 (IPv4)
00:25:55.537 Subsystem Type: 2 (NVM Subsystem)
00:25:55.537 Entry Flags:
00:25:55.537 Duplicate Returned Information: 0
00:25:55.537 Explicit Persistent Connection Support for Discovery: 0
00:25:55.537 Transport Requirements:
00:25:55.537 Secure Channel: Not Required
00:25:55.537 Port ID: 0 (0x0000)
00:25:55.537 Controller ID: 65535 (0xffff)
00:25:55.537 Admin Max SQ Size: 128
00:25:55.537 Transport Service Identifier: 4420
00:25:55.537 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:55.537 Transport Address: 10.0.0.2
[2024-11-20 07:23:19.659807] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:55.537 [2024-11-20 07:23:19.659814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96740) on tqpair=0x1f32750
00:25:55.537 [2024-11-20 07:23:19.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:55.537 [2024-11-20 07:23:19.659821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f968c0) on tqpair=0x1f32750
00:25:55.537 [2024-11-20 07:23:19.659824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:55.537 [2024-11-20 07:23:19.659827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96a40) on tqpair=0x1f32750
00:25:55.537 [2024-11-20 07:23:19.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:55.537 [2024-11-20 07:23:19.659832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750
00:25:55.537 [2024-11-20 07:23:19.659835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:55.537 [2024-11-20 07:23:19.659840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:55.537 [2024-11-20 07:23:19.659842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:55.537 [2024-11-20 07:23:19.659845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750)
00:25:55.537 [2024-11-20 07:23:19.659849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.537 [2024-11-20 07:23:19.659859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0
00:25:55.537 [2024-11-20 07:23:19.659896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.537 [2024-11-20 07:23:19.659899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.537 [2024-11-20 07:23:19.659901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.537 [2024-11-20 07:23:19.659904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on
tqpair=0x1f32750 00:25:55.537 [2024-11-20 07:23:19.659909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.537 [2024-11-20 07:23:19.659911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.537 [2024-11-20 07:23:19.659913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.537 [2024-11-20 07:23:19.659917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.537 [2024-11-20 07:23:19.659927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.537 [2024-11-20 07:23:19.659973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.537 [2024-11-20 07:23:19.659976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.537 [2024-11-20 07:23:19.659978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.537 [2024-11-20 07:23:19.659981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.537 [2024-11-20 07:23:19.659984] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:55.537 [2024-11-20 07:23:19.659987] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:55.537 [2024-11-20 07:23:19.659992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.537 [2024-11-20 07:23:19.659994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.537 [2024-11-20 07:23:19.659996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.537 [2024-11-20 07:23:19.660001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.537 [2024-11-20 07:23:19.660008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.537 [2024-11-20 07:23:19.660042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
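The discovery report printed above (generation counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1) can also be fetched out of band while the target is up. A sketch, where the first command is the same binary this test drives and the second assumes a Linux initiator with nvme-cli and the nvme-tcp module available:

  # SPDK userspace initiator
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # kernel-initiator equivalent: read the same discovery log page
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420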
00:25:55.538 [2024-11-20 07:23:19.660121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660342] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660550] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.538 [2024-11-20 07:23:19.660668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.538 [2024-11-20 07:23:19.660670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.538 [2024-11-20 07:23:19.660686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.538 [2024-11-20 07:23:19.660690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.538 [2024-11-20 07:23:19.660695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.538 [2024-11-20 07:23:19.660703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.538 [2024-11-20 07:23:19.660740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.660744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.660746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.660754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.660763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.660770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.660808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.660811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.660813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.660821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.660830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.660838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.660875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.660879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.660881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.660889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.660898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.660905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.660941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.660944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.660946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.660954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.660958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.660963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.660971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 
07:23:19.661009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 
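The long run of near-identical FABRIC PROPERTY GET entries in this stretch is the host's shutdown poll: the destruct path above recorded RTD3E = 0 and a 10000 ms shutdown timeout, after which the host keeps re-reading CSTS over the admin queue until the controller reports shutdown complete. This level of detail is only present because identify.sh passed -L all to spdk_nvme_identify. A sketch for reducing such a trace to just the state machine, assuming a debug build of SPDK (DEBUG-level messages are compiled out otherwise):

  # rerun with full tracing, keep only state transitions and fabrics property traffic
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all 2>&1 | grep -E 'setting state|FABRIC PROPERTY (GET|SET)'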
[2024-11-20 07:23:19.661214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.539 [2024-11-20 07:23:19.661450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-11-20 07:23:19.661458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.539 [2024-11-20 07:23:19.661496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.539 [2024-11-20 07:23:19.661500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.539 [2024-11-20 07:23:19.661502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.539 [2024-11-20 07:23:19.661510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.539 [2024-11-20 07:23:19.661514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661647] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 
[2024-11-20 07:23:19.661853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.661929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.661972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.661976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.661978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.661986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.661990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.661995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.662003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.662036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.662040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.662042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.662050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.662059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.662066] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.662108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.662111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.662113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.662121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.662130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.662138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.662175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.662179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.662180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.662189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.662193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.662197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.662205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.666230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 [2024-11-20 07:23:19.666241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.540 [2024-11-20 07:23:19.666244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.666246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750 00:25:55.540 [2024-11-20 07:23:19.666252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.666255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.540 [2024-11-20 07:23:19.666257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f32750) 00:25:55.540 [2024-11-20 07:23:19.666261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-11-20 07:23:19.666273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f96bc0, cid 3, qid 0 00:25:55.540 [2024-11-20 07:23:19.666310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.540 
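The numeric "pdu type" values that dominate this poll loop are the NVMe/TCP PDU type codes from the NVMe-oF TCP transport binding. For reference, a sketch of the code points as a C enum (the enumerator names here are illustrative, not SPDK's internal identifiers; the values are from the spec):

/* NVMe/TCP PDU types; the values match the "pdu type = N" log lines. */
enum nvme_tcp_pdu_type_sketch {
	PDU_IC_REQ       = 0x00, /* host -> controller connection init    */
	PDU_IC_RESP      = 0x01, /* "pdu type = 1": reply to the icreq     */
	PDU_H2C_TERM_REQ = 0x02, /* host-initiated connection termination  */
	PDU_C2H_TERM_REQ = 0x03, /* controller-initiated termination       */
	PDU_CAPSULE_CMD  = 0x04, /* command capsule, host to controller    */
	PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completion capsules    */
	PDU_H2C_DATA     = 0x06, /* host-to-controller data                */
	PDU_C2H_DATA     = 0x07, /* "pdu type = 7": identify/log payloads  */
	PDU_R2T          = 0x09, /* ready-to-transfer, controller to host  */
};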
00:25:55.540 [2024-11-20 07:23:19.666310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.540 [2024-11-20 07:23:19.666314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.540 [2024-11-20 07:23:19.666315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.540 [2024-11-20 07:23:19.666318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f96bc0) on tqpair=0x1f32750
00:25:55.540 [2024-11-20 07:23:19.666322] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:25:55.541 
00:25:55.541 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:55.541 [2024-11-20 07:23:19.695436] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:25:55.541 [2024-11-20 07:23:19.695461] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ]
00:25:55.805 [2024-11-20 07:23:19.839385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:25:55.805 [2024-11-20 07:23:19.839440] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:25:55.805 [2024-11-20 07:23:19.839443] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:25:55.805 [2024-11-20 07:23:19.839454] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:25:55.805 [2024-11-20 07:23:19.839461] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:25:55.805 [2024-11-20 07:23:19.839699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:25:55.805 [2024-11-20 07:23:19.839732] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2414750 0
00:25:55.805 [2024-11-20 07:23:19.847235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:25:55.805 [2024-11-20 07:23:19.847249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:25:55.805 [2024-11-20 07:23:19.847252] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:25:55.805 [2024-11-20 07:23:19.847254] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:25:55.805 [2024-11-20 07:23:19.847274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:55.805 [2024-11-20 07:23:19.847277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:55.805 [2024-11-20 07:23:19.847279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2414750)
00:25:55.805 [2024-11-20 07:23:19.847290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:25:55.805 [2024-11-20 07:23:19.847309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478740, cid 0, qid 0
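Everything from here to the identify dump below is the admin-queue bring-up that spdk_nvme_identify performs after parsing its -r transport ID string. As a minimal sketch of that entry point, assuming only SPDK's public host API from spdk/nvme.h and spdk/env.h (error handling trimmed; TRID_STR mirrors the string in the command line above):

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

#define TRID_STR "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	/* DPDK-backed environment init: this is what prints the
	 * "Starting SPDK ... / DPDK ... initialization" banner. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Accepts the same "trtype:... subnqn:..." grammar as -r. */
	if (spdk_nvme_transport_id_parse(&trid, TRID_STR) != 0) {
		return 1;
	}

	/* Drives the whole state machine logged below: TCP connect,
	 * ICReq/ICResp exchange, FABRIC CONNECT, register property
	 * reads, controller enable, identify, AER setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected to %s\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}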
00:25:55.805 [2024-11-20 07:23:19.855234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.805 [2024-11-20 07:23:19.855247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.805 [2024-11-20 07:23:19.855250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.805 [2024-11-20 07:23:19.855252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478740) on tqpair=0x2414750
00:25:55.805 [2024-11-20 07:23:19.855260] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:25:55.805 [2024-11-20 07:23:19.855265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:25:55.805 [2024-11-20 07:23:19.855268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:25:55.805 [2024-11-20 07:23:19.855288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... per-command CapsuleCmd/CapsuleResp DEBUG plumbing elided for the property accesses below; each follows the cycle shown earlier ...]
00:25:55.805 [2024-11-20 07:23:19.855354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:25:55.805 [2024-11-20 07:23:19.855358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:25:55.805 [2024-11-20 07:23:19.855371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.805 [2024-11-20 07:23:19.855430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:25:55.805 [2024-11-20 07:23:19.855435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:25:55.805 [2024-11-20 07:23:19.855447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.805 [2024-11-20 07:23:19.855502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:25:55.805 [2024-11-20 07:23:19.855516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.805 [2024-11-20 07:23:19.855575] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:25:55.805 [2024-11-20 07:23:19.855578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:25:55.805 [2024-11-20 07:23:19.855582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:25:55.805 [2024-11-20 07:23:19.855685] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:25:55.806 [2024-11-20 07:23:19.855693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
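The "read vs", "read cap", "check en", and CSTS.RDY waits above are ordinary NVMe register accesses carried inside Fabrics Property Get/Set capsules, one CapsuleCmd/CapsuleResp round trip per access. A reference sketch of the property offsets involved, as a C enum (the enum name is illustrative; the offsets are the standard NVMe controller register map):

/* Controller properties touched during bring-up, by byte offset. */
enum nvme_ctrlr_property_sketch {
	PROP_CAP  = 0x00, /* Controller Capabilities: "read cap"            */
	PROP_VS   = 0x08, /* Specification Version:   "read vs"             */
	PROP_CC   = 0x14, /* Controller Configuration: "check en" reads it,
	                   * "Setting CC.EN = 1" writes it                  */
	PROP_CSTS = 0x1c, /* Controller Status: the CSTS.RDY = 0/1 polling  */
};

The disable/enable dance is the generic NVMe init sequence: confirm CC.EN and CSTS.RDY are both 0, write CC.EN = 1 with a Property Set, then poll CSTS until RDY reads 1, bounded here by the 15000 ms timeouts in the state names.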
00:25:55.806 [2024-11-20 07:23:19.855703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2414750)
00:25:55.806 [2024-11-20 07:23:19.855707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.806 [2024-11-20 07:23:19.855765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:25:55.806 [2024-11-20 07:23:19.855779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.806 [2024-11-20 07:23:19.855836] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:25:55.806 [2024-11-20 07:23:19.855839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:25:55.806 [2024-11-20 07:23:19.855844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:25:55.806 [2024-11-20 07:23:19.855849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:25:55.806 [2024-11-20 07:23:19.855862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.806 [2024-11-20 07:23:19.855942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:55.806 [2024-11-20 07:23:19.855955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=4096, cccid=0
00:25:55.806 [2024-11-20 07:23:19.855958] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2478740) on tqpair(0x2414750): expected_datao=0, payload_size=4096
00:25:55.806 [2024-11-20 07:23:19.855986] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:25:55.806 [2024-11-20 07:23:19.855989] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:25:55.806 [2024-11-20 07:23:19.855992] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:25:55.806 [2024-11-20 07:23:19.855994] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:25:55.806 [2024-11-20 07:23:19.855997] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:25:55.806 [2024-11-20 07:23:19.855999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:25:55.806 [2024-11-20 07:23:19.856006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:25:55.806 [2024-11-20 07:23:19.856019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
[... CapsuleResp handling for the commands above elided ...]
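The five "identify_done" lines above are SPDK decoding the 4096-byte Identify Controller payload (CNS 01h) that just arrived in the C2HData PDU. Once a controller handle is ready, the same data is available to applications; a minimal sketch assuming the public API in spdk/nvme.h:

#include "spdk/nvme.h"
#include <stdio.h>

/* Print a few identify-controller fields for a connected ctrlr. */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("CNTLID: 0x%04x\n", cdata->cntlid);
	printf("Model:  %.40s\n", (const char *)cdata->mn);
	printf("Serial: %.20s\n", (const char *)cdata->sn);
	/* MDTS is a power of two in units of the minimum page size,
	 * 2^(12 + CAP.MPSMIN) bytes; the "MDTS max_xfer_size 131072"
	 * line above is this computation (MDTS = 5, MPSMIN = 0). */
	if (cdata->mdts != 0) {
		printf("Max transfer: %llu bytes\n",
		       (unsigned long long)1 << (12 + cap.bits.mpsmin + cdata->mdts));
	}
}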
00:25:55.807 [2024-11-20 07:23:19.856091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:55.807 [2024-11-20 07:23:19.856103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:55.807 [2024-11-20 07:23:19.856115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:55.807 [2024-11-20 07:23:19.856126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:55.807 [2024-11-20 07:23:19.856129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.807 [2024-11-20 07:23:19.856282] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:25:55.807 [2024-11-20 07:23:19.856285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:55.807 [2024-11-20 07:23:19.856414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... CapsuleCmd/CapsuleResp and tcp req bookkeeping DEBUG plumbing elided ...]
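The four ASYNC EVENT REQUEST capsules queued above (cid 0 through 3) are how an NVMe host arms asynchronous event delivery: the controller holds them until it has an event to report. On the SPDK side the consumer only registers a callback; a small sketch assuming the public API (the 5000000 us keep-alive cadence in the log follows from the keep_alive_timeout_ms controller option, confirmed via GET FEATURES KEEP ALIVE TIMER):

#include "spdk/nvme.h"
#include <stdio.h>

/* Invoked when one of the outstanding ASYNC EVENT REQUESTs completes,
 * e.g. for a namespace attribute change notice. */
static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("async event: cdw0 0x%08x\n", cpl->cdw0);
	}
}

void setup_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
	/* SPDK re-submits the AER commands internally; the application
	 * only supplies the notification callback. */
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}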
00:25:55.807 [2024-11-20 07:23:19.856504] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=4096, cccid=4
00:25:55.807 [2024-11-20 07:23:19.856537] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:25:55.807 [2024-11-20 07:23:19.856542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.807 [2024-11-20 07:23:19.856666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=4096, cccid=4
00:25:55.807 [2024-11-20 07:23:19.856707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:25:55.807 [2024-11-20 07:23:19.856724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.807 [2024-11-20 07:23:19.856792] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=4096, cccid=4
00:25:55.808 [2024-11-20 07:23:19.856820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
[... per-command CapsuleCmd/CapsuleResp and C2HData DEBUG plumbing elided ...]
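Namespace discovery above follows the standard identify ladder: CNS 02h returns the active namespace ID list ("Namespace 1 was added"), then CNS 00h and CNS 03h fetch each namespace's data and ID descriptors. Applications iterate the result through the controller handle; a short sketch assuming the public API:

#include "spdk/nvme.h"
#include <stdio.h>

/* Walk the active namespace list the controller just reported. */
void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %u: %llu sectors of %u bytes\n", nsid,
		       (unsigned long long)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}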
00:25:55.808 [2024-11-20 07:23:19.856840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856846] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:25:55.808 [2024-11-20 07:23:19.856849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:25:55.808 [2024-11-20 07:23:19.856852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:25:55.808 [2024-11-20 07:23:19.856870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.856883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:25:55.808 [2024-11-20 07:23:19.856989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.808 [2024-11-20 07:23:19.857237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... intermediate CapsuleResp DEBUG plumbing and tcp req bookkeeping elided ...]
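The four GET LOG PAGE commands decode as: cdw10 low byte = log identifier, upper 16 bits = number of dwords minus one. So 07ff0001 is the Error Information log (LID 01h, 2048 dwords, matching the 8192-byte C2HData below), 007f0002 is SMART / Health Information (512 bytes), 007f0003 is Firmware Slot Information (512 bytes), and 03ff0005 is Commands Supported and Effects (4096 bytes). A minimal sketch of issuing one of these from application code, assuming SPDK's public API (a plain buffer is fine for the TCP transport; PCIe transports would want spdk_dma_malloc()):

#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;   /* flag completion for the poll loop */
}

/* Fetch SMART / Health Information (LID 02h), i.e. the cdw10:007f0002
 * command above: 0x7f + 1 dwords = 512 bytes, global nsid. */
int fetch_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	static struct spdk_nvme_health_information_page health;
	bool done = false;

	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
			&health, sizeof(health), 0, log_page_cb, &done) != 0) {
		return -1;
	}
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("composite temperature: %u K\n", health.temperature);
	return 0;
}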
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=512, cccid=6 00:25:55.809 [2024-11-20 07:23:19.857445] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2479040) on tqpair(0x2414750): expected_datao=0, payload_size=512 00:25:55.809 [2024-11-20 07:23:19.857447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857451] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857453] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:55.809 [2024-11-20 07:23:19.857460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:55.809 [2024-11-20 07:23:19.857462] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857464] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2414750): datao=0, datal=4096, cccid=7 00:25:55.809 [2024-11-20 07:23:19.857466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24791c0) on tqpair(0x2414750): expected_datao=0, payload_size=4096 00:25:55.809 [2024-11-20 07:23:19.857469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857473] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.809 [2024-11-20 07:23:19.857484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.809 [2024-11-20 07:23:19.857486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478ec0) on tqpair=0x2414750 00:25:55.809 [2024-11-20 07:23:19.857497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.809 [2024-11-20 07:23:19.857501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.809 [2024-11-20 07:23:19.857503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478d40) on tqpair=0x2414750 00:25:55.809 [2024-11-20 07:23:19.857513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.809 [2024-11-20 07:23:19.857516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.809 [2024-11-20 07:23:19.857518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2479040) on tqpair=0x2414750 00:25:55.809 [2024-11-20 07:23:19.857525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.809 [2024-11-20 07:23:19.857528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.809 [2024-11-20 07:23:19.857530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.809 [2024-11-20 07:23:19.857532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24791c0) on tqpair=0x2414750 00:25:55.809 ===================================================== 00:25:55.809 NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:25:55.809 =====================================================
00:25:55.809 Controller Capabilities/Features
00:25:55.809 ================================
00:25:55.809 Vendor ID: 8086
00:25:55.809 Subsystem Vendor ID: 8086
00:25:55.809 Serial Number: SPDK00000000000001
00:25:55.809 Model Number: SPDK bdev Controller
00:25:55.809 Firmware Version: 25.01
00:25:55.809 Recommended Arb Burst: 6
00:25:55.809 IEEE OUI Identifier: e4 d2 5c
00:25:55.809 Multi-path I/O
00:25:55.809 May have multiple subsystem ports: Yes
00:25:55.809 May have multiple controllers: Yes
00:25:55.809 Associated with SR-IOV VF: No
00:25:55.809 Max Data Transfer Size: 131072
00:25:55.809 Max Number of Namespaces: 32
00:25:55.809 Max Number of I/O Queues: 127
00:25:55.809 NVMe Specification Version (VS): 1.3
00:25:55.809 NVMe Specification Version (Identify): 1.3
00:25:55.809 Maximum Queue Entries: 128
00:25:55.809 Contiguous Queues Required: Yes
00:25:55.809 Arbitration Mechanisms Supported
00:25:55.809 Weighted Round Robin: Not Supported
00:25:55.809 Vendor Specific: Not Supported
00:25:55.809 Reset Timeout: 15000 ms
00:25:55.809 Doorbell Stride: 4 bytes
00:25:55.809 NVM Subsystem Reset: Not Supported
00:25:55.809 Command Sets Supported
00:25:55.809 NVM Command Set: Supported
00:25:55.809 Boot Partition: Not Supported
00:25:55.809 Memory Page Size Minimum: 4096 bytes
00:25:55.809 Memory Page Size Maximum: 4096 bytes
00:25:55.809 Persistent Memory Region: Not Supported
00:25:55.809 Optional Asynchronous Events Supported
00:25:55.809 Namespace Attribute Notices: Supported
00:25:55.809 Firmware Activation Notices: Not Supported
00:25:55.809 ANA Change Notices: Not Supported
00:25:55.809 PLE Aggregate Log Change Notices: Not Supported
00:25:55.809 LBA Status Info Alert Notices: Not Supported
00:25:55.809 EGE Aggregate Log Change Notices: Not Supported
00:25:55.809 Normal NVM Subsystem Shutdown event: Not Supported
00:25:55.809 Zone Descriptor Change Notices: Not Supported
00:25:55.809 Discovery Log Change Notices: Not Supported
00:25:55.809 Controller Attributes
00:25:55.809 128-bit Host Identifier: Supported
00:25:55.809 Non-Operational Permissive Mode: Not Supported
00:25:55.809 NVM Sets: Not Supported
00:25:55.809 Read Recovery Levels: Not Supported
00:25:55.809 Endurance Groups: Not Supported
00:25:55.809 Predictable Latency Mode: Not Supported
00:25:55.809 Traffic Based Keep Alive: Not Supported
00:25:55.809 Namespace Granularity: Not Supported
00:25:55.809 SQ Associations: Not Supported
00:25:55.809 UUID List: Not Supported
00:25:55.809 Multi-Domain Subsystem: Not Supported
00:25:55.809 Fixed Capacity Management: Not Supported
00:25:55.809 Variable Capacity Management: Not Supported
00:25:55.809 Delete Endurance Group: Not Supported
00:25:55.809 Delete NVM Set: Not Supported
00:25:55.809 Extended LBA Formats Supported: Not Supported
00:25:55.809 Flexible Data Placement Supported: Not Supported
00:25:55.809 
00:25:55.809 Controller Memory Buffer Support
00:25:55.809 ================================
00:25:55.809 Supported: No
00:25:55.809 
00:25:55.809 Persistent Memory Region Support
00:25:55.809 ================================
00:25:55.809 Supported: No
00:25:55.809 
00:25:55.809 Admin Command Set Attributes
00:25:55.809 ============================
00:25:55.809 Security Send/Receive: Not Supported
00:25:55.809 Format NVM: Not Supported
00:25:55.809 Firmware Activate/Download: Not Supported
00:25:55.809 Namespace Management: Not Supported
00:25:55.809 Device Self-Test: Not Supported
00:25:55.809 Directives: Not Supported
00:25:55.809 NVMe-MI: Not Supported
00:25:55.809 Virtualization Management: Not Supported
00:25:55.809 Doorbell Buffer Config: Not Supported
00:25:55.809 Get LBA Status Capability: Not Supported
00:25:55.809 Command & Feature Lockdown Capability: Not Supported
00:25:55.809 Abort Command Limit: 4
00:25:55.809 Async Event Request Limit: 4
00:25:55.809 Number of Firmware Slots: N/A
00:25:55.809 Firmware Slot 1 Read-Only: N/A
00:25:55.809 Firmware Activation Without Reset: N/A
00:25:55.809 Multiple Update Detection Support: N/A
00:25:55.809 Firmware Update Granularity: No Information Provided
00:25:55.809 Per-Namespace SMART Log: No
00:25:55.809 Asymmetric Namespace Access Log Page: Not Supported
00:25:55.809 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:55.809 Command Effects Log Page: Supported
00:25:55.809 Get Log Page Extended Data: Supported
00:25:55.809 Telemetry Log Pages: Not Supported
00:25:55.809 Persistent Event Log Pages: Not Supported
00:25:55.809 Supported Log Pages Log Page: May Support
00:25:55.809 Commands Supported & Effects Log Page: Not Supported
00:25:55.809 Feature Identifiers & Effects Log Page: May Support
00:25:55.809 NVMe-MI Commands & Effects Log Page: May Support
00:25:55.809 Data Area 4 for Telemetry Log: Not Supported
00:25:55.809 Error Log Page Entries Supported: 128
00:25:55.809 Keep Alive: Supported
00:25:55.809 Keep Alive Granularity: 10000 ms
00:25:55.809 
00:25:55.809 NVM Command Set Attributes
00:25:55.809 ==========================
00:25:55.809 Submission Queue Entry Size
00:25:55.809 Max: 64
00:25:55.810 Min: 64
00:25:55.810 Completion Queue Entry Size
00:25:55.810 Max: 16
00:25:55.810 Min: 16
00:25:55.810 Number of Namespaces: 32
00:25:55.810 Compare Command: Supported
00:25:55.810 Write Uncorrectable Command: Not Supported
00:25:55.810 Dataset Management Command: Supported
00:25:55.810 Write Zeroes Command: Supported
00:25:55.810 Set Features Save Field: Not Supported
00:25:55.810 Reservations: Supported
00:25:55.810 Timestamp: Not Supported
00:25:55.810 Copy: Supported
00:25:55.810 Volatile Write Cache: Present
00:25:55.810 Atomic Write Unit (Normal): 1
00:25:55.810 Atomic Write Unit (PFail): 1
00:25:55.810 Atomic Compare & Write Unit: 1
00:25:55.810 Fused Compare & Write: Supported
00:25:55.810 Scatter-Gather List
00:25:55.810 SGL Command Set: Supported
00:25:55.810 SGL Keyed: Supported
00:25:55.810 SGL Bit Bucket Descriptor: Not Supported
00:25:55.810 SGL Metadata Pointer: Not Supported
00:25:55.810 Oversized SGL: Not Supported
00:25:55.810 SGL Metadata Address: Not Supported
00:25:55.810 SGL Offset: Supported
00:25:55.810 Transport SGL Data Block: Not Supported
00:25:55.810 Replay Protected Memory Block: Not Supported
00:25:55.810 
00:25:55.810 Firmware Slot Information
00:25:55.810 =========================
00:25:55.810 Active slot: 1
00:25:55.810 Slot 1 Firmware Revision: 25.01
00:25:55.810 
00:25:55.810 
00:25:55.810 Commands Supported and Effects
00:25:55.810 ==============================
00:25:55.810 Admin Commands
00:25:55.810 --------------
00:25:55.810 Get Log Page (02h): Supported
00:25:55.810 Identify (06h): Supported
00:25:55.810 Abort (08h): Supported
00:25:55.810 Set Features (09h): Supported
00:25:55.810 Get Features (0Ah): Supported
00:25:55.810 Asynchronous Event Request (0Ch): Supported
00:25:55.810 Keep Alive (18h): Supported
00:25:55.810 I/O Commands
00:25:55.810 ------------
00:25:55.810 Flush (00h): Supported LBA-Change
00:25:55.810 Write (01h): Supported LBA-Change
00:25:55.810 Read (02h): Supported
00:25:55.810 Compare (05h): Supported
00:25:55.810 Write Zeroes (08h): Supported LBA-Change
00:25:55.810 Dataset Management (09h): Supported LBA-Change
00:25:55.810 Copy (19h): Supported LBA-Change
00:25:55.810 
00:25:55.810 Error Log
00:25:55.810 =========
00:25:55.810 
00:25:55.810 Arbitration
00:25:55.810 ===========
00:25:55.810 Arbitration Burst: 1
00:25:55.810 
00:25:55.810 Power Management
00:25:55.810 ================
00:25:55.810 Number of Power States: 1
00:25:55.810 Current Power State: Power State #0
00:25:55.810 Power State #0:
00:25:55.810 Max Power: 0.00 W
00:25:55.810 Non-Operational State: Operational
00:25:55.810 Entry Latency: Not Reported
00:25:55.810 Exit Latency: Not Reported
00:25:55.810 Relative Read Throughput: 0
00:25:55.810 Relative Read Latency: 0
00:25:55.810 Relative Write Throughput: 0
00:25:55.810 Relative Write Latency: 0
00:25:55.810 Idle Power: Not Reported
00:25:55.810 Active Power: Not Reported
00:25:55.810 Non-Operational Permissive Mode: Not Supported
00:25:55.810 
00:25:55.810 Health Information
00:25:55.810 ==================
00:25:55.810 Critical Warnings:
00:25:55.810 Available Spare Space: OK
00:25:55.810 Temperature: OK
00:25:55.810 Device Reliability: OK
00:25:55.810 Read Only: No
00:25:55.810 Volatile Memory Backup: OK
00:25:55.810 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:55.810 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:25:55.810 Available Spare: 0%
00:25:55.810 Available Spare Threshold: 0%
00:25:55.810 Life Percentage Used:[2024-11-20 07:23:19.857610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2414750) 00:25:55.810 [2024-11-20 07:23:19.857618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.810 [2024-11-20 07:23:19.857628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24791c0, cid 7, qid 0 00:25:55.810 [2024-11-20 07:23:19.857663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.810 [2024-11-20 07:23:19.857667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.810 [2024-11-20 07:23:19.857669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24791c0) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857692] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:55.810 [2024-11-20 07:23:19.857701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478740) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.810 [2024-11-20 07:23:19.857709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24788c0) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.810 [2024-11-20 07:23:19.857715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478a40) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.810 [2024-11-20 07:23:19.857720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.810 [2024-11-20 07:23:19.857728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.810 [2024-11-20 07:23:19.857737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.810 [2024-11-20 07:23:19.857747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.810 [2024-11-20 07:23:19.857784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.810 [2024-11-20 07:23:19.857791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.810 [2024-11-20 07:23:19.857793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.810 [2024-11-20 07:23:19.857809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.810 [2024-11-20 07:23:19.857818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.810 [2024-11-20 07:23:19.857872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.810 [2024-11-20 07:23:19.857878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.810 [2024-11-20 07:23:19.857880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.810 [2024-11-20 07:23:19.857886] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:55.810 [2024-11-20 07:23:19.857888] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:55.810 [2024-11-20 07:23:19.857894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.810 [2024-11-20 07:23:19.857898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.810 [2024-11-20 07:23:19.857902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.857911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 
3, qid 0 00:25:55.811 [2024-11-20 07:23:19.857944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.857951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.857953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.857955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.857961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.857964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.857966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.857970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.857978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858394] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858622] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.811 [2024-11-20 07:23:19.858695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.811 [2024-11-20 07:23:19.858702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.811 [2024-11-20 07:23:19.858738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.811 [2024-11-20 07:23:19.858745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.811 [2024-11-20 07:23:19.858747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.811 [2024-11-20 07:23:19.858755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.811 [2024-11-20 07:23:19.858757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.858764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.858771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.858807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.858813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.858815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.858824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 
00:25:55.812 [2024-11-20 07:23:19.858832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.858840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.858875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.858882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.858884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.858892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.858901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.858909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.858944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.858951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.858953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.858961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.858965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.858970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.858978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 
07:23:19.859046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:25:55.812 [2024-11-20 07:23:19.859294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.812 [2024-11-20 07:23:19.859515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.812 [2024-11-20 07:23:19.859524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.812 [2024-11-20 07:23:19.859532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.812 [2024-11-20 07:23:19.859564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.812 [2024-11-20 07:23:19.859570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.812 [2024-11-20 07:23:19.859572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.812 [2024-11-20 07:23:19.859574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859724] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.859932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859936] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.859940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.859949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.859988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.859995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.859997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.859999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.860005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.860014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.860022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.860055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.860062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.860064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.860072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.860081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.813 [2024-11-20 07:23:19.860089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0 00:25:55.813 [2024-11-20 07:23:19.860120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:55.813 [2024-11-20 07:23:19.860127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:55.813 [2024-11-20 07:23:19.860129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750 00:25:55.813 [2024-11-20 07:23:19.860137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:55.813 [2024-11-20 07:23:19.860141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750) 00:25:55.813 [2024-11-20 07:23:19.860145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.813 [2024-11-20 07:23:19.860153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0
00:25:55.813 [2024-11-20 07:23:19.860187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.813 [2024-11-20 07:23:19.860190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.813 [2024-11-20 07:23:19.860192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.813 [2024-11-20 07:23:19.860195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750
00:25:55.813 [2024-11-20 07:23:19.860201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:55.813 [2024-11-20 07:23:19.860203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:55.813 [2024-11-20 07:23:19.860205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750)
00:25:55.813 [2024-11-20 07:23:19.860209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.813 [2024-11-20 07:23:19.860217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0
[... the same PROPERTY GET request/response cycle repeats, byte for byte except for timestamps, roughly two dozen more times (07:23:19.860263 through 07:23:19.863171) while the host waits for the controller to finish shutting down ...]
00:25:55.817 [2024-11-20 07:23:19.863211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.817 [2024-11-20 07:23:19.863214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.817 [2024-11-20 07:23:19.863216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.817 [2024-11-20 07:23:19.863218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750
00:25:55.817 [2024-11-20 07:23:19.867239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:55.817 [2024-11-20 07:23:19.867244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:55.817 [2024-11-20 07:23:19.867246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2414750)
00:25:55.817 [2024-11-20 07:23:19.867251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.817 [2024-11-20 07:23:19.867265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2478bc0, cid 3, qid 0
00:25:55.817 [2024-11-20 07:23:19.867307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:55.817 [2024-11-20 07:23:19.867311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:55.817 [2024-11-20 07:23:19.867313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:55.817 [2024-11-20 07:23:19.867315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2478bc0) on tqpair=0x2414750
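
[editor's note] The condensed cycle above is the host side of an NVMe-oF controller shutdown: SPDK's nvme_ctrlr_shutdown_poll_async() keeps issuing a Fabrics PROPERTY GET over the admin queue (a register read, here polling controller status) until the target reports that shutdown has finished, which is the nvme_ctrlr.c entry immediately below. When reading a capture like this one, the number of polls is easy to recover from the NOTICE lines; a minimal sketch, assuming the console output was saved to a file named identify.log (hypothetical name):

    # Count how many PROPERTY GET polls the shutdown took (one NOTICE line per poll).
    grep -c 'FABRIC PROPERTY GET' identify.log
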
00:25:55.817 [2024-11-20 07:23:19.867320] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 9 milliseconds
00:25:55.817 0%
00:25:55.817 Data Units Read: 0
00:25:55.817 Data Units Written: 0
00:25:55.817 Host Read Commands: 0
00:25:55.817 Host Write Commands: 0
00:25:55.817 Controller Busy Time: 0 minutes
00:25:55.817 Power Cycles: 0
00:25:55.817 Power On Hours: 0 hours
00:25:55.817 Unsafe Shutdowns: 0
00:25:55.817 Unrecoverable Media Errors: 0
00:25:55.817 Lifetime Error Log Entries: 0
00:25:55.817 Warning Temperature Time: 0 minutes
00:25:55.817 Critical Temperature Time: 0 minutes
00:25:55.817
00:25:55.817 Number of Queues
00:25:55.817 ================
00:25:55.817 Number of I/O Submission Queues: 127
00:25:55.817 Number of I/O Completion Queues: 127
00:25:55.817
00:25:55.817 Active Namespaces
00:25:55.817 =================
00:25:55.817 Namespace ID:1
00:25:55.817 Error Recovery Timeout: Unlimited
00:25:55.817 Command Set Identifier: NVM (00h)
00:25:55.817 Deallocate: Supported
00:25:55.817 Deallocated/Unwritten Error: Not Supported
00:25:55.817 Deallocated Read Value: Unknown
00:25:55.817 Deallocate in Write Zeroes: Not Supported
00:25:55.817 Deallocated Guard Field: 0xFFFF
00:25:55.817 Flush: Supported
00:25:55.817 Reservation: Supported
00:25:55.817 Namespace Sharing Capabilities: Multiple Controllers
00:25:55.817 Size (in LBAs): 131072 (0GiB)
00:25:55.817 Capacity (in LBAs): 131072 (0GiB)
00:25:55.817 Utilization (in LBAs): 131072 (0GiB)
00:25:55.817 NGUID: ABCDEF0123456789ABCDEF0123456789
00:25:55.817 EUI64: ABCDEF0123456789
00:25:55.817 UUID: 71eee607-ef40-46e5-9605-68a03e4a198b
00:25:55.817 Thin Provisioning: Not Supported
00:25:55.817 Per-NS Atomic Units: Yes
00:25:55.817 Atomic Boundary Size (Normal): 0
00:25:55.817 Atomic Boundary Size (PFail): 0
00:25:55.817 Atomic Boundary Offset: 0
00:25:55.817 Maximum Single Source Range Length: 65535
00:25:55.817 Maximum Copy Length: 65535
00:25:55.817 Maximum Source Range Count: 1
00:25:55.817 NGUID/EUI64 Never Reused: No
00:25:55.817 Namespace Write Protected: No
00:25:55.817 Number of LBA Formats: 1
00:25:55.817 Current LBA Format: LBA Format #00
00:25:55.817 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:55.817
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e
00:25:55.817 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20}
00:25:55.818 07:23:19
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:55.818 rmmod nvme_tcp 00:25:55.818 rmmod nvme_fabrics 00:25:55.818 rmmod nvme_keyring 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 72622 ']' 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 72622 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 72622 ']' 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 72622 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72622 00:25:55.818 killing process with pid 72622 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72622' 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 72622 00:25:55.818 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 72622 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 
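
[editor's note] Everything from nvmftestfini above through the iptables restore just below is the stock teardown for these tests: unload the host-side NVMe modules, kill the target application by pid, then remove the virtual links that nvmf/setup.sh created. A condensed sketch of that flow, reconstructed from the xtrace output rather than copied from nvmf/common.sh (the pid and device names are the ones this particular run happened to use):

    #!/usr/bin/env bash
    set -u

    # nvmfcleanup: flush I/O, then unload the initiator-side kernel modules
    # (modprobe -v prints the rmmod lines seen in the trace above).
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # killprocess: verify the pid is alive, log, then kill and reap it
    # (wait only succeeds because the harness started the target itself).
    pid=72622
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi

    # nvmf_fini: drop the bridge and the veth endpoints, skipping devices
    # that no longer exist (target0/target1 hit the continue branch below).
    for dev in nvmf_br initiator0 initiator1 target0 target1; do
        [[ -e /sys/class/net/$dev/address ]] || continue
        ip link delete "$dev"
    done

    # iptr: strip the SPDK_NVMF-tagged firewall rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
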
00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:25:56.076 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:25:56.336 ************************************ 00:25:56.336 END TEST nvmf_identify 00:25:56.336 ************************************ 00:25:56.336 00:25:56.336 real 0m2.346s 00:25:56.336 user 0m6.205s 00:25:56.336 sys 0m0.557s 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 ************************************ 00:25:56.336 START TEST nvmf_perf 00:25:56.336 ************************************ 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:56.336 * Looking for test storage... 00:25:56.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.336 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.336 --rc genhtml_branch_coverage=1 00:25:56.336 --rc genhtml_function_coverage=1 00:25:56.336 --rc genhtml_legend=1 00:25:56.336 --rc geninfo_all_blocks=1 00:25:56.336 --rc geninfo_unexecuted_blocks=1 00:25:56.336 00:25:56.336 ' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.337 --rc genhtml_branch_coverage=1 00:25:56.337 --rc genhtml_function_coverage=1 00:25:56.337 --rc genhtml_legend=1 00:25:56.337 --rc geninfo_all_blocks=1 00:25:56.337 --rc geninfo_unexecuted_blocks=1 00:25:56.337 00:25:56.337 ' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.337 --rc genhtml_branch_coverage=1 00:25:56.337 --rc genhtml_function_coverage=1 00:25:56.337 --rc genhtml_legend=1 00:25:56.337 --rc geninfo_all_blocks=1 00:25:56.337 --rc geninfo_unexecuted_blocks=1 00:25:56.337 00:25:56.337 ' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.337 --rc genhtml_branch_coverage=1 00:25:56.337 --rc genhtml_function_coverage=1 00:25:56.337 --rc genhtml_legend=1 00:25:56.337 --rc geninfo_all_blocks=1 00:25:56.337 --rc geninfo_unexecuted_blocks=1 00:25:56.337 00:25:56.337 ' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:56.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@296 -- # prepare_net_devs 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@223 -- # create_target_ns 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:56.337 07:23:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:56.337 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth 
target0 target0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target0 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:56.338 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:56.597 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:56.597 10.0.0.1 00:25:56.597 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:56.597 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:56.597 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.597 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:56.598 
07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:56.598 10.0.0.2 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target0_br master 
nvmf_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- 
# eval ' ip link set initiator1_br up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772163 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:56.598 10.0.0.3 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 
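The addresses above all come from a single integer pool (the trace shows ip_pool=0x0a000001, i.e. 10.0.0.1) that setup.sh advances by two per interface pair. A minimal sketch of the value-to-dotted-quad step: the final printf matches the trace, while the byte extraction shown here is an assumed decomposition of the 32-bit value, not lifted from setup.sh itself.

# val_to_ip, sketched: split a 32-bit pool value into four octets.
val=167772163                       # 0x0A000003, initiator1's address
printf '%u.%u.%u.%u\n' \
  $(( (val >> 24) & 0xFF )) \
  $(( (val >> 16) & 0xFF )) \
  $(( (val >>  8) & 0xFF )) \
  $((  val        & 0xFF ))         # prints 10.0.0.3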
00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772164 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:56.598 10.0.0.4 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.598 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 
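Stripped of the tracing, each interface pair reduces to a handful of iproute2 and iptables calls. A condensed sketch of what the log runs for pair 1 (root required; the script additionally mirrors each address into /sys/class/net/<dev>/ifalias and tags the iptables rule with an SPDK_NVMF comment, both omitted here):

# One veth pair per side; the *_br peers attach to the nvmf_br bridge.
ip link add initiator1 type veth peer name initiator1_br
ip link add target1    type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk               # target side lives in the ns
ip addr add 10.0.0.3/24 dev initiator1
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br
ip link set target1_br master nvmf_br
ip link set initiator1_br up
ip link set target1_br up
# allow NVMe/TCP traffic in from the initiator side (port 4420, as in the log)
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT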
00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:56.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:25:56.599 00:25:56.599 --- 10.0.0.1 ping statistics --- 00:25:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.599 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:56.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:56.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:25:56.599 00:25:56.599 --- 10.0.0.2 ping statistics --- 00:25:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.599 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:56.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:56.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:25:56.599 00:25:56.599 --- 10.0.0.3 ping statistics --- 00:25:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.599 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.599 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:56.600 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:56.600 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:25:56.600 00:25:56.600 --- 10.0.0.4 ping statistics --- 00:25:56.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.600 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # return 0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # 
echo initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:25:56.600 07:23:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:56.600 ' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=72877 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 72877 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 72877 ']' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:56.600 07:23:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:56.859 [2024-11-20 07:23:20.825070] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
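Everything the target binds from here on happens inside nvmf_ns_spdk, which is why nvmf_tgt is launched through ip netns exec while the perf initiators run in the default namespace. A sketch of the launch-and-wait pattern: waitforlisten's actual probe lives in autotest_common.sh, and polling rpc_get_methods over the default /var/tmp/spdk.sock socket is one simple stand-in ($SPDK_DIR is a placeholder for the checkout path).

# start the target in the namespace so it sees target0/target1
ip netns exec nvmf_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# block until the RPC socket is up and answering
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ready, pid $nvmfpid"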
00:25:56.859 [2024-11-20 07:23:20.825128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.859 [2024-11-20 07:23:20.962452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.859 [2024-11-20 07:23:20.994525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.859 [2024-11-20 07:23:20.994565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.859 [2024-11-20 07:23:20.994571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.859 [2024-11-20 07:23:20.994575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.859 [2024-11-20 07:23:20.994579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.859 [2024-11-20 07:23:20.995185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.859 [2024-11-20 07:23:20.995364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.859 [2024-11-20 07:23:20.995764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.859 [2024-11-20 07:23:20.996173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.859 [2024-11-20 07:23:21.029739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.791 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.792 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:25:57.792 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:58.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:25:58.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:58.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:25:58.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:58.306 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:58.306 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:25:58.306 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:58.306 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:58.306 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:58.563 [2024-11-20 07:23:22.647371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.563 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.820 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:58.820 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.820 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:58.820 07:23:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:59.078 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.336 [2024-11-20 07:23:23.284172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.336 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:59.336 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:59.336 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:59.336 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:59.336 07:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:00.709 Initializing NVMe Controllers 00:26:00.709 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:00.709 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:00.709 Initialization complete. Launching workers. 00:26:00.710 ======================================================== 00:26:00.710 Latency(us) 00:26:00.710 Device Information : IOPS MiB/s Average min max 00:26:00.710 PCIE (0000:00:10.0) NSID 1 from core 0: 32395.95 126.55 987.50 55.32 16090.17 00:26:00.710 ======================================================== 00:26:00.710 Total : 32395.95 126.55 987.50 55.32 16090.17 00:26:00.710 00:26:00.710 07:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:01.645 Initializing NVMe Controllers 00:26:01.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:01.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:01.645 Initialization complete. Launching workers. 
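
Everything the fabric runs need is now in place: perf.sh pulled the local controller's PCI address (0000:00:10.0) out of the generated bdev config with the framework_get_config/jq pipeline shown above, created a malloc bdev (Malloc0) alongside it, and exported both through a TCP subsystem. Condensed, with every value taken from this log, the target-side RPC sequence was:

  # rpc.py talks to the target over its default socket, /var/tmp/spdk.sock
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The ~32.4k IOPS table above is the local PCIe baseline; the queue-depth-1 run just launched is the first to go over the fabric, and its latency table follows.
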
00:26:01.645 ======================================================== 00:26:01.645 Latency(us) 00:26:01.645 Device Information : IOPS MiB/s Average min max 00:26:01.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5376.41 21.00 185.73 66.32 4206.77 00:26:01.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.28 7966.86 12022.16 00:26:01.645 ======================================================== 00:26:01.645 Total : 5500.91 21.49 364.74 66.32 12022.16 00:26:01.645 00:26:01.903 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:03.275 Initializing NVMe Controllers 00:26:03.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:03.275 Initialization complete. Launching workers. 00:26:03.275 ======================================================== 00:26:03.275 Latency(us) 00:26:03.275 Device Information : IOPS MiB/s Average min max 00:26:03.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11312.17 44.19 2830.06 479.79 6368.02 00:26:03.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3712.43 14.50 8728.50 6915.61 72274.95 00:26:03.275 ======================================================== 00:26:03.275 Total : 15024.60 58.69 4287.51 479.79 72274.95 00:26:03.275 00:26:03.275 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:03.275 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:05.807 Initializing NVMe Controllers 00:26:05.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.807 Controller IO queue size 128, less than required. 00:26:05.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.807 Controller IO queue size 128, less than required. 00:26:05.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.807 Initialization complete. Launching workers. 
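
A decoding key for the spdk_nvme_perf invocations in this test (a summary of the standard flags, not something the tool prints): -q is the queue depth, -o the IO size in bytes, -w the workload pattern, -M the read percentage of the mix, -t the run time in seconds, and -r the transport ID string selecting the target; the remaining switches (-HI, -O, -c, -P, --transport-stat) are reproduced verbatim in the log where used. The general shape of these runs, then, is:

  # 128-deep, 256 KiB, 50/50 random read/write, 2 s, against the TCP listener
  spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

NSID 1 and NSID 2 in these tables correspond to Malloc0 and Nvme0n1 in the order they were added to the subsystem, which is presumably why the two rows diverge so sharply: the malloc-backed namespace posts far higher IOPS and far lower latency than the emulated NVMe device. The table for the 256 KiB run just launched follows.
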
00:26:05.807 ======================================================== 00:26:05.807 Latency(us) 00:26:05.807 Device Information : IOPS MiB/s Average min max 00:26:05.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2285.56 571.39 56246.46 28428.40 85434.21 00:26:05.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 693.00 173.25 195365.45 65781.45 311062.13 00:26:05.807 ======================================================== 00:26:05.807 Total : 2978.57 744.64 88614.35 28428.40 311062.13 00:26:05.807 00:26:05.807 07:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:06.066 Initializing NVMe Controllers 00:26:06.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.066 Controller IO queue size 128, less than required. 00:26:06.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:06.066 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:06.066 Controller IO queue size 128, less than required. 00:26:06.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:06.066 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:06.066 WARNING: Some requested NVMe devices were skipped 00:26:06.066 No valid NVMe controllers or AIO or URING devices found 00:26:06.066 07:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:08.594 Initializing NVMe Controllers 00:26:08.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.594 Controller IO queue size 128, less than required. 00:26:08.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.594 Controller IO queue size 128, less than required. 00:26:08.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:08.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:08.594 Initialization complete. Launching workers. 
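
The 36964-byte run above is effectively a negative test: 36964 is not a multiple of either namespace's sector size (36964 = 72 × 512 + 100 and 36964 = 9 × 4096 + 100), so perf removes NSID 1 and then NSID 2 from the test, and with nothing left to drive it correctly reports that no valid controllers were found. The --transport-stat run launched above prints per-queue TCP poll statistics before its latency table, shown next.
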
00:26:08.594 00:26:08.594 ==================== 00:26:08.594 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:08.594 TCP transport: 00:26:08.594 polls: 19981 00:26:08.594 idle_polls: 11296 00:26:08.594 sock_completions: 8685 00:26:08.594 nvme_completions: 11161 00:26:08.594 submitted_requests: 16822 00:26:08.594 queued_requests: 1 00:26:08.594 00:26:08.594 ==================== 00:26:08.594 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:08.594 TCP transport: 00:26:08.594 polls: 20286 00:26:08.594 idle_polls: 13503 00:26:08.594 sock_completions: 6783 00:26:08.594 nvme_completions: 9103 00:26:08.594 submitted_requests: 13632 00:26:08.594 queued_requests: 1 00:26:08.594 ======================================================== 00:26:08.594 Latency(us) 00:26:08.594 Device Information : IOPS MiB/s Average min max 00:26:08.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2789.93 697.48 46515.14 19984.72 75742.43 00:26:08.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2275.45 568.86 56253.82 25297.15 101562.88 00:26:08.594 ======================================================== 00:26:08.594 Total : 5065.38 1266.35 50889.91 19984.72 101562.88 00:26:08.594 00:26:08.594 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:08.594 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:08.852 rmmod nvme_tcp 00:26:08.852 rmmod nvme_fabrics 00:26:08.852 rmmod nvme_keyring 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 72877 ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 72877 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 72877 ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 72877 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72877 00:26:08.852 killing process with pid 72877 00:26:08.852 07:23:32 
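
One inference the statistics block above supports: polls minus idle_polls matches sock_completions exactly for both queues (19981 − 11296 = 8685 and 20286 − 13503 = 6783), so in this run every non-idle poll iteration drained socket completions, and the two pollers found work on roughly 43% and 33% of their iterations respectively. That reading is derived from the numbers here, not stated by the tool.
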
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72877' 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 72877 00:26:08.852 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 72877 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:10.303 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:26:10.577 00:26:10.577 real 0m14.328s 00:26:10.577 user 0m51.308s 00:26:10.577 sys 0m3.249s 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:10.577 ************************************ 00:26:10.577 END TEST nvmf_perf 00:26:10.577 ************************************ 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.577 ************************************ 00:26:10.577 START TEST nvmf_fio_host 00:26:10.577 ************************************ 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:10.577 * Looking for test storage... 
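
One teardown detail worth noting before the next test gets under way: every iptables rule the harness installs is tagged with a comment, so cleanup is a single filtered round-trip instead of rule-by-rule deletion. Both halves of the pattern appear verbatim in this log:

  # restore the ruleset minus anything tagged by the harness
  iptables-save | grep -v SPDK_NVMF | iptables-restore

The nvmf_fio_host test starting here will rebuild the same namespace-plus-veth topology from scratch before running fio against the target.
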
00:26:10.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:10.577 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.837 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:10.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.838 --rc genhtml_branch_coverage=1 00:26:10.838 --rc genhtml_function_coverage=1 00:26:10.838 --rc genhtml_legend=1 00:26:10.838 --rc geninfo_all_blocks=1 00:26:10.838 --rc geninfo_unexecuted_blocks=1 00:26:10.838 00:26:10.838 ' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:10.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.838 --rc genhtml_branch_coverage=1 00:26:10.838 --rc genhtml_function_coverage=1 00:26:10.838 --rc genhtml_legend=1 00:26:10.838 --rc geninfo_all_blocks=1 00:26:10.838 --rc geninfo_unexecuted_blocks=1 00:26:10.838 00:26:10.838 ' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:10.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.838 --rc genhtml_branch_coverage=1 00:26:10.838 --rc genhtml_function_coverage=1 00:26:10.838 --rc genhtml_legend=1 00:26:10.838 --rc geninfo_all_blocks=1 00:26:10.838 --rc geninfo_unexecuted_blocks=1 00:26:10.838 00:26:10.838 ' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:10.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.838 --rc genhtml_branch_coverage=1 00:26:10.838 --rc genhtml_function_coverage=1 00:26:10.838 --rc genhtml_legend=1 00:26:10.838 --rc geninfo_all_blocks=1 00:26:10.838 --rc geninfo_unexecuted_blocks=1 00:26:10.838 00:26:10.838 ' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.838 [paths/export.sh@2-6: PATH prepends for the go/golangci/protoc toolchains, the same three directories repeated many times over; elided] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.838 [paths/export.sh@2-6: the same repeated PATH prepends as above; elided] 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:10.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:10.838 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.839 07:23:34
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@223 -- # create_target_ns 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set 
nvmf_br up' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:10.839 07:23:34 
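
The ipts wrapper above is the installing half of the comment-tag scheme used at teardown: it appends a comment carrying the rule's own text so that grep -v SPDK_NVMF can strip it later. Expanded, the bridge rule just added is:

  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The per-device INPUT rules opening TCP port 4420 further down follow the same pattern.
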
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target0 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:10.839 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:10.840 10.0.0.1 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:10.840 10.0.0.2 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 
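
set_ip's val_to_ip helper, exercised twice above, just unpacks a 32-bit integer into a dotted quad: 167772161 is 0x0A000001, i.e. 10.0.0.1, and 167772162 is 10.0.0.2. A shell equivalent of the conversion (a sketch; setup.sh itself computes the four octets before calling printf, as the log shows):

  v=167772161
  printf '%u.%u.%u.%u\n' $((v >> 24 & 255)) $((v >> 16 & 255)) \
      $((v >> 8 & 255)) $((v & 255))    # prints 10.0.0.1

The pool starts at ip_pool=0x0a000001 and advances by two per interface pair, which is why the second pair below lands on 10.0.0.3 and 10.0.0.4.
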
00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:10.840 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:10.840 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:10.841 07:23:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target1 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:10.841 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772163 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:11.100 10.0.0.3 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772164 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:11.100 10.0.0.4 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:11.100 07:23:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:11.100 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:11.101 
07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:11.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
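[editor's note] The set_ip calls traced above derive each dotted-quad address from a 32-bit pool value (167772163 is 0x0A000003, hence 10.0.0.3; 167772164 gives 10.0.0.4). The trace only shows printf '%u.%u.%u.%u' being fed pre-split octets, so the following is a minimal sketch of the implied val_to_ip arithmetic, assuming a plain byte-shift decomposition rather than the exact setup.sh source:

    # Sketch only: split a 32-bit value into four octets, as the trace implies.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
    }
    val_to_ip 167772163   # prints 10.0.0.3
    val_to_ip 167772164   # prints 10.0.0.4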
00:26:11.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:26:11.101 00:26:11.101 --- 10.0.0.1 ping statistics --- 00:26:11.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.101 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:11.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
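[editor's note] For reference, the second initiator/target pair assembled in the trace above condenses to the sequence below, reconstructed from the eval'd commands in this log (a sketch, not the setup.sh source). The tee into /sys/class/net/<dev>/ifalias records each address so the later get_ip_address calls can read it back without parsing ip output:

    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
    ip link set initiator1 up
    ip netns exec nvmf_ns_spdk ip link set target1 up
    ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
    ip link set target1_br master nvmf_br && ip link set target1_br up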
00:26:11.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:26:11.101 00:26:11.101 --- 10.0.0.2 ping statistics --- 00:26:11.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.101 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:11.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
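[editor's note] Note the direction of each probe in this ping_ips pass: initiator addresses live on the host side of the bridge and are pinged from inside nvmf_ns_spdk, while target addresses live inside the namespace and are pinged from the host. A hedged sketch of that pairing over both address pairs seen in the trace:

    # Sketch of the ping_ips pattern; nvmf_ns_spdk is the target namespace above.
    for pair in "10.0.0.1 10.0.0.2" "10.0.0.3 10.0.0.4"; do
        set -- $pair
        ip netns exec nvmf_ns_spdk ping -c 1 "$1"   # initiator IP, reached from the ns
        ping -c 1 "$2"                              # target IP, reached from the host
    done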
00:26:11.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:26:11.101 00:26:11.101 --- 10.0.0.3 ping statistics --- 00:26:11.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.101 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:11.101 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:11.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
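[editor's note] The ipts wrapper seen earlier (setup.sh@73) tags every rule it inserts with an 'SPDK_NVMF:' comment, which is what lets the teardown later in this log restore the ruleset by filtering those rules out wholesale. Reconstructed from the trace:

    # Insert, as ipts does above for each initiator device:
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
    # Strip on teardown (the iptr step at the end of this test does the equivalent):
    iptables-save | grep -v SPDK_NVMF | iptables-restore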
00:26:11.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:26:11.102 00:26:11.102 --- 10.0.0.4 ping statistics --- 00:26:11.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.102 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # return 0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:11.102 07:23:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:11.102 ' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=73330 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 73330 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 73330 ']' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.102 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.102 07:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.102 [2024-11-20 07:23:35.289268] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:11.103 [2024-11-20 07:23:35.289335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.361 [2024-11-20 07:23:35.427894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.361 [2024-11-20 07:23:35.465763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.361 [2024-11-20 07:23:35.465921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.361 [2024-11-20 07:23:35.465983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.361 [2024-11-20 07:23:35.466011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.361 [2024-11-20 07:23:35.466060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.361 [2024-11-20 07:23:35.466755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.361 [2024-11-20 07:23:35.466851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.361 [2024-11-20 07:23:35.467056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.361 [2024-11-20 07:23:35.467082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.361 [2024-11-20 07:23:35.499112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:12.294 [2024-11-20 07:23:36.401044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.294 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:12.552 Malloc1 00:26:12.552 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:12.810 07:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:13.070 07:23:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.070 [2024-11-20 07:23:37.258738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:13.328 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:13.329 07:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:13.587 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:13.587 fio-3.35 00:26:13.587 Starting 1 thread 00:26:16.155 00:26:16.155 test: (groupid=0, jobs=1): err= 0: pid=73413: Wed Nov 20 07:23:39 2024 00:26:16.155 read: IOPS=9722, BW=38.0MiB/s (39.8MB/s)(76.2MiB/2006msec) 00:26:16.155 slat (nsec): min=1902, max=380956, avg=2100.36, stdev=3526.23 00:26:16.155 clat (usec): min=2728, max=37769, avg=6867.14, stdev=2369.05 00:26:16.155 lat (usec): min=2730, max=37771, avg=6869.24, stdev=2369.04 00:26:16.155 clat percentiles (usec): 00:26:16.155 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5473], 00:26:16.155 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:26:16.155 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7898], 95.00th=[ 9896], 00:26:16.155 | 99.00th=[14746], 99.50th=[22938], 99.90th=[35390], 99.95th=[35390], 00:26:16.155 | 99.99th=[38011] 00:26:16.155 bw ( KiB/s): min=31064, max=47392, per=99.98%, avg=38882.00, stdev=6677.84, samples=4 00:26:16.155 iops : min= 7766, max=11848, avg=9720.50, stdev=1669.46, samples=4 00:26:16.155 write: IOPS=9734, BW=38.0MiB/s (39.9MB/s)(76.3MiB/2006msec); 0 zone resets 00:26:16.155 slat (nsec): min=1947, max=291788, avg=2199.72, stdev=2339.02 00:26:16.155 clat (usec): min=2229, max=34507, avg=6223.31, stdev=2180.61 00:26:16.155 lat (usec): min=2231, max=34509, avg=6225.51, stdev=2180.62 00:26:16.155 clat percentiles (usec): 00:26:16.155 | 1.00th=[ 3785], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4948], 00:26:16.155 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6128], 60.00th=[ 6259], 00:26:16.155 | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7111], 95.00th=[ 8979], 00:26:16.155 | 99.00th=[13829], 99.50th=[21627], 99.90th=[31065], 99.95th=[31851], 00:26:16.155 | 99.99th=[32637] 00:26:16.155 bw ( KiB/s): min=30336, max=48056, per=99.97%, avg=38926.00, stdev=7250.11, samples=4 00:26:16.155 iops : min= 7584, max=12014, avg=9731.50, stdev=1812.53, samples=4 00:26:16.155 lat (msec) : 4=0.86%, 10=95.28%, 20=3.27%, 50=0.58% 00:26:16.155 cpu : usr=79.75%, sys=16.01%, ctx=5, majf=0, minf=7 00:26:16.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:16.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:16.155 issued rwts: total=19503,19528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:16.155 00:26:16.155 Run status group 0 (all jobs): 00:26:16.155 READ: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=76.2MiB (79.9MB), run=2006-2006msec 00:26:16.155 WRITE: bw=38.0MiB/s (39.9MB/s), 38.0MiB/s-38.0MiB/s (39.9MB/s-39.9MB/s), io=76.3MiB (80.0MB), run=2006-2006msec 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:16.155 07:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:16.155 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:16.155 fio-3.35 00:26:16.155 Starting 1 thread 00:26:18.684 00:26:18.684 test: (groupid=0, jobs=1): err= 0: pid=73456: Wed Nov 20 07:23:42 2024 00:26:18.684 read: IOPS=10.0k, BW=157MiB/s (165MB/s)(315MiB/2004msec) 00:26:18.684 slat (usec): min=3, max=109, avg= 3.41, stdev= 1.67 00:26:18.684 clat (usec): min=2173, max=22686, avg=6925.53, stdev=2295.40 00:26:18.684 lat (usec): min=2176, max=22689, avg=6928.94, stdev=2295.53 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[ 3359], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 4883], 00:26:18.684 | 30.00th=[ 5407], 40.00th=[ 5997], 50.00th=[ 6587], 60.00th=[ 7308], 00:26:18.684 | 70.00th=[ 8029], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10683], 00:26:18.684 | 99.00th=[13173], 99.50th=[14222], 99.90th=[20317], 99.95th=[20841], 00:26:18.684 | 99.99th=[21103] 00:26:18.684 bw ( 
KiB/s): min=77344, max=89179, per=50.64%, avg=81414.75, stdev=5301.03, samples=4 00:26:18.684 iops : min= 4834, max= 5573, avg=5088.25, stdev=330.98, samples=4 00:26:18.684 write: IOPS=5937, BW=92.8MiB/s (97.3MB/s)(167MiB/1795msec); 0 zone resets 00:26:18.684 slat (usec): min=36, max=441, avg=38.31, stdev= 7.42 00:26:18.684 clat (usec): min=2514, max=17772, avg=10068.00, stdev=1530.59 00:26:18.684 lat (usec): min=2552, max=17810, avg=10106.31, stdev=1530.41 00:26:18.684 clat percentiles (usec): 00:26:18.684 | 1.00th=[ 6128], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 8848], 00:26:18.684 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:26:18.684 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11863], 95.00th=[12387], 00:26:18.684 | 99.00th=[13829], 99.50th=[15008], 99.90th=[16581], 99.95th=[17171], 00:26:18.684 | 99.99th=[17695] 00:26:18.684 bw ( KiB/s): min=81184, max=91792, per=89.18%, avg=84716.00, stdev=4993.22, samples=4 00:26:18.684 iops : min= 5074, max= 5737, avg=5294.75, stdev=312.08, samples=4 00:26:18.684 lat (msec) : 4=4.15%, 10=72.39%, 20=23.36%, 50=0.09% 00:26:18.684 cpu : usr=84.17%, sys=11.88%, ctx=20, majf=0, minf=3 00:26:18.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:18.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:18.684 issued rwts: total=20137,10657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:18.684 00:26:18.684 Run status group 0 (all jobs): 00:26:18.684 READ: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=315MiB (330MB), run=2004-2004msec 00:26:18.684 WRITE: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=167MiB (175MB), run=1795-1795msec 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:18.684 rmmod nvme_tcp 00:26:18.684 rmmod nvme_fabrics 00:26:18.684 rmmod nvme_keyring 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 73330 ']' 00:26:18.684 07:23:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 73330 ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.684 killing process with pid 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73330' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 73330 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:18.684 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- 
# ip link delete initiator0 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:18.943 00:26:18.943 real 0m8.289s 00:26:18.943 user 0m33.841s 00:26:18.943 sys 0m1.895s 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.943 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.943 ************************************ 00:26:18.943 END TEST nvmf_fio_host 00:26:18.943 ************************************ 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.943 ************************************ 00:26:18.943 START TEST nvmf_failover 00:26:18.943 ************************************ 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:18.943 * Looking for test storage... 
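[editor's note] Condensing the nvmf_fio_host body that just finished: the target is launched inside the namespace, configured over /var/tmp/spdk.sock, and fio then drives it through the SPDK NVMe plugin. A sketch assembled from the RPCs and fio invocations in the trace, with paths abbreviated relative to the spdk repo (the harness also adds a discovery listener and waits for the RPC socket before issuing commands):

    ip netns exec nvmf_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten polls /var/tmp/spdk.sock here before any RPC is sent
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096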
00:26:18.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.943 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.944 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.204 --rc genhtml_branch_coverage=1 00:26:19.204 --rc genhtml_function_coverage=1 00:26:19.204 --rc genhtml_legend=1 00:26:19.204 --rc geninfo_all_blocks=1 00:26:19.204 --rc geninfo_unexecuted_blocks=1 00:26:19.204 00:26:19.204 ' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.204 --rc genhtml_branch_coverage=1 00:26:19.204 --rc genhtml_function_coverage=1 00:26:19.204 --rc genhtml_legend=1 00:26:19.204 --rc geninfo_all_blocks=1 00:26:19.204 --rc geninfo_unexecuted_blocks=1 00:26:19.204 00:26:19.204 ' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.204 --rc genhtml_branch_coverage=1 00:26:19.204 --rc genhtml_function_coverage=1 00:26:19.204 --rc genhtml_legend=1 00:26:19.204 --rc geninfo_all_blocks=1 00:26:19.204 --rc geninfo_unexecuted_blocks=1 00:26:19.204 00:26:19.204 ' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.204 --rc genhtml_branch_coverage=1 00:26:19.204 --rc genhtml_function_coverage=1 00:26:19.204 --rc genhtml_legend=1 00:26:19.204 --rc geninfo_all_blocks=1 00:26:19.204 --rc geninfo_unexecuted_blocks=1 00:26:19.204 00:26:19.204 ' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.204 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:19.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:19.205 07:23:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@223 -- # create_target_ns 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # 
local dev=nvmf_br in_ns= 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:19.205 07:23:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target0 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.205 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 
00:26:19.206 10.0.0.1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:19.206 10.0.0.2 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
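The set_ip calls above turn the integer pool values into dotted-quad addresses (167772161 = 0x0A000001 -> 10.0.0.1, 167772162 -> 10.0.0.2). A minimal sketch of that conversion, assuming a plain bit-shift implementation behind the printf visible in the trace:

val_to_ip() {
  # Split a 32-bit integer into four bytes, most significant first.
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2

Incrementing the pool by 2 per interface pair, as setup.sh does, therefore yields consecutive initiator/target addresses (10.0.0.1/10.0.0.2, then 10.0.0.3/10.0.0.4).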
00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:19.206 07:23:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772163 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:19.206 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:19.207 10.0.0.3 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772164 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:19.207 10.0.0.4 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:19.207 07:23:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:19.207 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:19.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:26:19.466 00:26:19.466 --- 10.0.0.1 ping statistics --- 00:26:19.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.466 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:19.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:26:19.466 00:26:19.466 --- 10.0.0.2 ping statistics --- 00:26:19.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.466 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:19.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:19.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:26:19.466 00:26:19.466 --- 10.0.0.3 ping statistics --- 00:26:19.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.466 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:19.466 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:19.467 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:19.467 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.122 ms 00:26:19.467 00:26:19.467 --- 10.0.0.4 ping statistics --- 00:26:19.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.467 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # return 0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:19.467 07:23:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:19.467 ' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=73721 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 73721 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 73721 ']' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
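The nvmfappstart step that follows boils down to launching the target inside the test namespace and waiting for its RPC socket. A simplified sketch reproducing it by hand, with the poll loop standing in for waitforlisten (binary path and flags copied from the trace; the loop itself is an assumption, using the stock rpc_get_methods RPC against the default /var/tmp/spdk.sock):

ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll until the target answers on its default RPC socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done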
00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.467 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:19.467 [2024-11-20 07:23:43.570881] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:19.467 [2024-11-20 07:23:43.570942] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.725 [2024-11-20 07:23:43.710480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:19.725 [2024-11-20 07:23:43.746420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.725 [2024-11-20 07:23:43.746460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.725 [2024-11-20 07:23:43.746466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.725 [2024-11-20 07:23:43.746471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.725 [2024-11-20 07:23:43.746476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.725 [2024-11-20 07:23:43.747129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.725 [2024-11-20 07:23:43.747237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.725 [2024-11-20 07:23:43.747258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.725 [2024-11-20 07:23:43.778256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.290 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:20.547 [2024-11-20 07:23:44.623685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.547 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:20.805 Malloc0 00:26:20.805 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:21.063 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc0 00:26:21.321 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.321 [2024-11-20 07:23:45.452622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.321 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:21.579 [2024-11-20 07:23:45.660762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:21.579 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:21.837 [2024-11-20 07:23:45.864918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=73773 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 73773 /var/tmp/bdevperf.sock 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 73773 ']' 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
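Taken together, the host/failover.sh@22-28 steps above amount to this target-side RPC sequence (commands as run in the trace; rpc abbreviates the full scripts/rpc.py path):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners on the same target address give the initiator alternate
# paths (4421, 4422) to fail over to once 4420 goes away.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s $port
done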
00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.837 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:22.801 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.801 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:22.801 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:23.059 NVMe0n1 00:26:23.059 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:23.317 00:26:23.317 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=73791 00:26:23.317 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:23.317 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:24.251 07:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.510 [2024-11-20 07:23:48.533637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19edcf0 is same with the state(6) to be set
00:26:24.512 07:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:27.797 07:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:27.797 00:26:27.797 07:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:27.797 07:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:31.139 07:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.139 [2024-11-20 07:23:55.140463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.139 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:32.071 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:32.329 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 73791 00:26:38.888 { 00:26:38.888 "results": [ 00:26:38.888 { 00:26:38.888 "job": "NVMe0n1", 00:26:38.888 "core_mask": "0x1", 00:26:38.888 "workload": "verify", 00:26:38.888 "status": "finished", 00:26:38.888 "verify_range": { 00:26:38.888 "start": 0, 00:26:38.888 "length": 16384 00:26:38.888 }, 00:26:38.888 "queue_depth": 128, 00:26:38.888 "io_size": 4096, 00:26:38.888 "runtime": 15.008357, 00:26:38.888 "iops":
12155.760953713987, 00:26:38.888 "mibps": 47.48344122544526, 00:26:38.888 "io_failed": 4197, 00:26:38.888 "io_timeout": 0, 00:26:38.888 "avg_latency_us": 10270.357168162456, 00:26:38.888 "min_latency_us": 431.6553846153846, 00:26:38.888 "max_latency_us": 14720.393846153846 00:26:38.888 } 00:26:38.888 ], 00:26:38.888 "core_count": 1 00:26:38.888 } 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 73773 ']' 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.888 killing process with pid 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73773' 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 73773 00:26:38.888 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:38.888 [2024-11-20 07:23:45.910288] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:38.888 [2024-11-20 07:23:45.910363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73773 ] 00:26:38.888 [2024-11-20 07:23:46.050017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.888 [2024-11-20 07:23:46.086872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.888 [2024-11-20 07:23:46.117272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:38.888 Running I/O for 15 seconds... 
00:26:38.888 8341.00 IOPS, 32.58 MiB/s [2024-11-20T07:24:03.091Z] [2024-11-20 07:23:48.534398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.888 [2024-11-20 07:23:48.534438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536677]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536895] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.891 [2024-11-20 07:23:48.536984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.536995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.891 [2024-11-20 07:23:48.537004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.537013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f5070 is same with the state(6) to be set 00:26:38.891 [2024-11-20 07:23:48.537024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:38.891 [2024-11-20 07:23:48.537030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:38.891 [2024-11-20 07:23:48.537041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:26:38.891 [2024-11-20 07:23:48.537050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.537090] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:38.891 [2024-11-20 07:23:48.537131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.891 [2024-11-20 07:23:48.537142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.537152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.891 [2024-11-20 07:23:48.537161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.537170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.891 [2024-11-20 07:23:48.537178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.891 [2024-11-20 07:23:48.537188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.891 [2024-11-20 07:23:48.537197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:48.537205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:38.892 [2024-11-20 07:23:48.537247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215a710 (9): Bad file descriptor 00:26:38.892 [2024-11-20 07:23:48.540567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:38.892 [2024-11-20 07:23:48.569941] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:26:38.892 10089.00 IOPS, 39.41 MiB/s [2024-11-20T07:24:03.095Z] 11022.00 IOPS, 43.05 MiB/s [2024-11-20T07:24:03.095Z] 11494.50 IOPS, 44.90 MiB/s [2024-11-20T07:24:03.095Z] [2024-11-20 07:23:51.928627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:51.928723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:51.928741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:51.928757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:51.928772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.892 [2024-11-20 07:23:51.928787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.892 [2024-11-20 07:23:51.928794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-20 07:23:51.928803-930697: repeated nvme_qpair.c *NOTICE* print_command/print_completion pairs - every queued WRITE (sqid:1, lba 36520-37104, len:8) and READ (sqid:1, lba 36088-36456, len:8) completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:38.895 [2024-11-20 07:23:51.930708]
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f91d0 is same with the state(6) to be set 00:26:38.895 [2024-11-20 07:23:51.930717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:38.895 [2024-11-20 07:23:51.930722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:38.895 [2024-11-20 07:23:51.930728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36464 len:8 PRP1 0x0 PRP2 0x0 00:26:38.895 [2024-11-20 07:23:51.930735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:51.930770] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:38.895 [2024-11-20 07:23:51.930804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.895 [2024-11-20 07:23:51.930814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:51.930822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.895 [2024-11-20 07:23:51.930829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:51.930837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.895 [2024-11-20 07:23:51.930844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:51.930852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.895 [2024-11-20 07:23:51.930859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:51.930866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:38.895 [2024-11-20 07:23:51.933475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:38.895 [2024-11-20 07:23:51.933503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215a710 (9): Bad file descriptor 00:26:38.895 [2024-11-20 07:23:51.956407] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:26:38.895 11655.80 IOPS, 45.53 MiB/s [2024-11-20T07:24:03.098Z] 11829.17 IOPS, 46.21 MiB/s [2024-11-20T07:24:03.098Z] 11954.14 IOPS, 46.70 MiB/s [2024-11-20T07:24:03.098Z] 12036.88 IOPS, 47.02 MiB/s [2024-11-20T07:24:03.098Z] [2024-11-20 07:23:56.351320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.895 [2024-11-20 07:23:56.351512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.895 [2024-11-20 07:23:56.351528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:38.895 [2024-11-20 07:23:56.351543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.895 [2024-11-20 07:23:56.351559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.895 [2024-11-20 07:23:56.351574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.895 [2024-11-20 07:23:56.351583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351871] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.351903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.351989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.351996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.352012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.896 [2024-11-20 07:23:56.352027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:38.896 [2024-11-20 07:23:56.352206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.896 [2024-11-20 07:23:56.352214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.896 [2024-11-20 07:23:56.352231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352539] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.897 [2024-11-20 07:23:56.352693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98152 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.897 [2024-11-20 07:23:56.352792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.897 [2024-11-20 07:23:56.352804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.352812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.352827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:38.898 [2024-11-20 07:23:56.352874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.352986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.352994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.898 [2024-11-20 07:23:56.353081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.898 [2024-11-20 07:23:56.353357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.898 [2024-11-20 07:23:56.353364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.898 [2024-11-20 07:23:56.353373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.898 [2024-11-20 07:23:56.353380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.898 [2024-11-20 07:23:56.353388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.898 [2024-11-20 07:23:56.353395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.898 [2024-11-20 07:23:56.353403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.898 [2024-11-20 07:23:56.353410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.898 [2024-11-20 07:23:56.353419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.898 [2024-11-20 07:23:56.353426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.898 [2024-11-20 07:23:56.353435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.899 [2024-11-20 07:23:56.353442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.899 [2024-11-20 07:23:56.353457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:38.899 [2024-11-20 07:23:56.353502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:38.899 [2024-11-20 07:23:56.353508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98400 len:8 PRP1 0x0 PRP2 0x0
00:26:38.899 [2024-11-20 07:23:56.353516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353555] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:38.899 [2024-11-20 07:23:56.353591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:38.899 [2024-11-20 07:23:56.353600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:38.899 [2024-11-20 07:23:56.353621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:38.899 [2024-11-20 07:23:56.353636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:38.899 [2024-11-20 07:23:56.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.899 [2024-11-20 07:23:56.353659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:38.899 [2024-11-20 07:23:56.356337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:38.899 [2024-11-20 07:23:56.356365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215a710 (9): Bad file descriptor
00:26:38.899 [2024-11-20 07:23:56.379711] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:26:38.899 12036.22 IOPS, 47.02 MiB/s [2024-11-20T07:24:03.102Z] 12167.00 IOPS, 47.53 MiB/s [2024-11-20T07:24:03.102Z] 12277.18 IOPS, 47.96 MiB/s [2024-11-20T07:24:03.102Z] 12368.50 IOPS, 48.31 MiB/s [2024-11-20T07:24:03.102Z] 12442.31 IOPS, 48.60 MiB/s [2024-11-20T07:24:03.102Z] 12289.71 IOPS, 48.01 MiB/s [2024-11-20T07:24:03.102Z] 12156.13 IOPS, 47.48 MiB/s
00:26:38.899 Latency(us)
00:26:38.899 [2024-11-20T07:24:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.899 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:38.899 Verification LBA range: start 0x0 length 0x4000
00:26:38.899 NVMe0n1 : 15.01 12155.76 47.48 279.64 0.00 10270.36 431.66 14720.39
00:26:38.899 [2024-11-20T07:24:03.102Z] ===================================================================================================================
00:26:38.899 [2024-11-20T07:24:03.102Z] Total : 12155.76 47.48 279.64 0.00 10270.36 431.66 14720.39
00:26:38.899 Received shutdown signal, test time was about 15.000000 seconds
00:26:38.899
00:26:38.899 Latency(us)
00:26:38.899 [2024-11-20T07:24:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.899 [2024-11-20T07:24:03.102Z] ===================================================================================================================
00:26:38.899 [2024-11-20T07:24:03.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
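The @65-@67 lines above are the pass criterion for the 15-second run: each forced path drop must produce exactly one successful controller reset, three in total. A condensed sketch of that check, using the try.txt path visible later in this log:

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  # Three failovers were forced, so anything other than three successful resets fails the test.
  ((count == 3)) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }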
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=73970
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 73970 /var/tmp/bdevperf.sock
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 73970 ']'
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:38.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:38.899 07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:39.465 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:39.465 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:39.465 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:39.722 [2024-11-20 07:24:03.679612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:39.722 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:39.722 [2024-11-20 07:24:03.903776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:39.722 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:39.981 NVMe0n1
00:26:40.239 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:40.496
00:26:40.496 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:40.754
00:26:40.754 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:40.754 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:40.754 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:41.013 07:24:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:44.294 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:44.294 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
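The @76-@88 trace publishes two extra portals for the same subsystem, registers all three paths with the bdevperf instance, and then pulls the first one. A condensed sketch of the same sequence (the script issues the three attach calls individually; the loop here is just shorthand):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      # -x failover registers each trid as an alternate path for NVMe0 rather than a new controller.
      $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Verify the controller exists, then drop the original path to start the failover chain.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1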
00:26:44.294 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74052
00:26:44.294 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74052
00:26:44.294 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:45.664 {
00:26:45.664 "results": [
00:26:45.664 {
00:26:45.664 "job": "NVMe0n1",
00:26:45.664 "core_mask": "0x1",
00:26:45.664 "workload": "verify",
00:26:45.664 "status": "finished",
00:26:45.664 "verify_range": {
00:26:45.664 "start": 0,
00:26:45.664 "length": 16384
00:26:45.664 },
00:26:45.664 "queue_depth": 128,
00:26:45.664 "io_size": 4096,
00:26:45.664 "runtime": 1.004063,
00:26:45.664 "iops": 9979.453480508693,
00:26:45.664 "mibps": 38.98224015823708,
00:26:45.664 "io_failed": 0,
00:26:45.664 "io_timeout": 0,
00:26:45.664 "avg_latency_us": 12777.275627207124,
00:26:45.664 "min_latency_us": 793.9938461538461,
00:26:45.664 "max_latency_us": 10838.646153846154
00:26:45.664 }
00:26:45.664 ],
00:26:45.664 "core_count": 1
00:26:45.664 }
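The JSON above is what bdevperf.py perform_tests returns for the one-second verify run (about 9979 IOPS at roughly 12.8 ms average latency). One way to pull the headline numbers out of it; the jq post-processing is illustrative and not part of failover.sh:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests > results.json
  # Print one summary line per job from the results document.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json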
00:26:45.664 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:45.664 [2024-11-20 07:24:02.631711] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:26:45.664 [2024-11-20 07:24:02.631776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73970 ]
00:26:45.664 [2024-11-20 07:24:02.777477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.664 [2024-11-20 07:24:02.809375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.664 [2024-11-20 07:24:02.838261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:26:45.664 [2024-11-20 07:24:05.173928] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:45.664 [2024-11-20 07:24:05.174020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.664 [2024-11-20 07:24:05.174033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.664 [2024-11-20 07:24:05.174043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.664 [2024-11-20 07:24:05.174051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.664 [2024-11-20 07:24:05.174058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.664 [2024-11-20 07:24:05.174065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.664 [2024-11-20 07:24:05.174072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.664 [2024-11-20 07:24:05.174079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.664 [2024-11-20 07:24:05.174086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:45.664 [2024-11-20 07:24:05.174118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:45.664 [2024-11-20 07:24:05.174133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9710 (9): Bad file descriptor
00:26:45.664 [2024-11-20 07:24:05.182403] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:45.664 Running I/O for 1 seconds...
00:26:45.664 9892.00 IOPS, 38.64 MiB/s
00:26:45.664
00:26:45.664 Latency(us)
00:26:45.664 [2024-11-20T07:24:09.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.664 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:45.664 Verification LBA range: start 0x0 length 0x4000
00:26:45.664 NVMe0n1 : 1.00 9979.45 38.98 0.00 0.00 12777.28 793.99 10838.65
00:26:45.664 [2024-11-20T07:24:09.867Z] ===================================================================================================================
00:26:45.664 [2024-11-20T07:24:09.867Z] Total : 9979.45 38.98 0.00 0.00 12777.28 793.99 10838.65
00:26:45.664 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:45.664 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:45.921 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:45.921 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:46.179 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:46.179 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:46.179 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
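The @95-@103 lines repeat the detach-and-verify pattern for the two remaining extra paths: after each detach, the NVMe0 controller object must still be present on the paths that are left. A condensed sketch (the traced run checks after each detach and only sleeps before the final check):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  for port in 4422 4421; do
      $rpc_py bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  sleep 3   # let bdev_nvme settle on the surviving path before the last check
  $rpc_py bdev_nvme_get_controllers | grep -q NVMe0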
00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73970' 00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 73970 00:26:49.458 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 73970 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:49.716 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:49.716 rmmod nvme_tcp 00:26:49.716 rmmod nvme_fabrics 00:26:49.716 rmmod nvme_keyring 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 73721 ']' 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 73721 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 73721 ']' 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 73721 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73721 00:26:49.977 killing process with pid 73721 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73721' 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 73721 00:26:49.977 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 73721 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:49.977 07:24:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:49.977 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:26:50.237 00:26:50.237 real 0m31.205s 00:26:50.237 user 2m1.061s 00:26:50.237 sys 0m4.202s 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.237 ************************************ 00:26:50.237 END TEST nvmf_failover 00:26:50.237 ************************************ 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.237 ************************************ 00:26:50.237 START TEST nvmf_host_discovery 00:26:50.237 ************************************ 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:50.237 * Looking for test storage... 
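Before the discovery test starts, nvmftestfini in the trace above unwinds the virtual topology: the target network namespace goes first (taking target0/target1 with it, which is why their address-file probes fall through to a bare continue), then the nvmf_br bridge and the host-side initiator veths are deleted, and finally only SPDK's comment-tagged iptables rules are stripped. A condensed sketch of that teardown, assuming the default two-pair dev_map (the explicit ip netns delete is an assumption, since the trace hides _remove_target_ns output):

    ip netns delete nvmf_ns_spdk   # assumed body of _remove_target_ns; drops target0/target1
    ip link delete nvmf_br         # main bridge
    ip link delete initiator0      # host-side ends of the veth pairs
    ip link delete initiator1
    # remove only rules carrying the SPDK_NVMF comment; leave the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The setup trace that follows (create_target_ns, create_main_bridge, setup_interface_pair) rebuilds exactly this topology for the next test.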
00:26:50.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:50.237 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.238 --rc genhtml_branch_coverage=1 00:26:50.238 --rc genhtml_function_coverage=1 00:26:50.238 --rc genhtml_legend=1 00:26:50.238 --rc geninfo_all_blocks=1 00:26:50.238 --rc geninfo_unexecuted_blocks=1 00:26:50.238 00:26:50.238 ' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.238 --rc genhtml_branch_coverage=1 00:26:50.238 --rc genhtml_function_coverage=1 00:26:50.238 --rc genhtml_legend=1 00:26:50.238 --rc geninfo_all_blocks=1 00:26:50.238 --rc geninfo_unexecuted_blocks=1 00:26:50.238 00:26:50.238 ' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.238 --rc genhtml_branch_coverage=1 00:26:50.238 --rc genhtml_function_coverage=1 00:26:50.238 --rc genhtml_legend=1 00:26:50.238 --rc geninfo_all_blocks=1 00:26:50.238 --rc geninfo_unexecuted_blocks=1 00:26:50.238 00:26:50.238 ' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.238 --rc genhtml_branch_coverage=1 00:26:50.238 --rc genhtml_function_coverage=1 00:26:50.238 --rc genhtml_legend=1 00:26:50.238 --rc geninfo_all_blocks=1 00:26:50.238 --rc geninfo_unexecuted_blocks=1 00:26:50.238 00:26:50.238 ' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:50.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:50.238 07:24:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:50.238 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:50.239 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:50.501 07:24:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:50.501 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:50.502 07:24:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:50.502 10.0.0.1 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:50.502 10.0.0.2 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 
in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:50.502 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name 
target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:50.503 10.0.0.3 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:50.503 10.0.0.4 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 
-- # [[ veth == veth ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:50.503 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:50.504 07:24:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:50.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:26:50.504 00:26:50.504 --- 10.0.0.1 ping statistics --- 00:26:50.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.504 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.2 in_ns= count=1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:50.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.016 ms 00:26:50.504 00:26:50.504 --- 10.0.0.2 ping statistics --- 00:26:50.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.504 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:50.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:50.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:26:50.504 00:26:50.504 --- 10.0.0.3 ping statistics --- 00:26:50.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.504 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:26:50.504 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:50.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
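Each of these address lookups follows the same traced chain (nvmf/setup.sh@156-166): `get_net_dev` maps the logical name (initiator0, target1, ...) to a netdev, and the IP is then read back out of the interface's `ifalias`, which the setup phase populated, rather than parsed from `ip addr` output. A condensed sketch, under the assumption that the logical name and the netdev name coincide, as they do throughout this run:

```bash
# Condensed form of get_ip_address as traced at nvmf/setup.sh@156-166.
# Assumption: the logical device name (initiator0, target0, ...) is also the
# netdev name, which holds for this run; the real get_net_dev does the mapping.
get_ip_address() {
    local dev=$1 in_ns=$2 ip
    [[ -n $in_ns ]] && local -n ns=$in_ns
    # Setup stored each interface's IP in its ifalias, so reading it back
    # avoids parsing `ip addr` output.
    ip=$(eval "${ns[*]} cat /sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator1                  # -> 10.0.0.3, read on the host
get_ip_address target1 NVMF_TARGET_NS_CMD  # -> 10.0.0.4, read inside the netns
```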
00:26:50.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:26:50.505 00:26:50.505 --- 10.0.0.4 ping statistics --- 00:26:50.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.505 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # return 0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:50.505 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:50.767 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:50.768 ' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=74373 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 74373 00:26:50.768 07:24:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74373 ']' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.768 07:24:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.768 [2024-11-20 07:24:14.796843] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:50.768 [2024-11-20 07:24:14.796897] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.768 [2024-11-20 07:24:14.938371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.030 [2024-11-20 07:24:14.972696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.030 [2024-11-20 07:24:14.972747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.030 [2024-11-20 07:24:14.972755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.030 [2024-11-20 07:24:14.972761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.030 [2024-11-20 07:24:14.972766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
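The target application is now up: `nvmf_tgt -i 0 -e 0xFFFF -m 0x2` was launched inside nvmf_ns_spdk as pid 74373, and `waitforlisten` held the script until the RPC socket at /var/tmp/spdk.sock answered (the host-side instance on /tmp/host.sock is brought up the same way below). A sketch of that launch-and-wait step; the `-S` socket probe is an assumption, since the trace shows only the pid, the rpc_addr, and the 100-retry budget:

```bash
# Launch-and-wait step around pid 74373 in the trace. The -S readiness probe
# is an assumption; the real waitforlisten in autotest_common.sh goes further
# and issues an RPC against the socket.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

ip netns exec nvmf_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
        [[ -S $rpc_addr ]] && return 0          # socket exists: RPC server is up
        sleep 0.5
    done
    return 1
}

waitforlisten "$nvmfpid"
```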
00:26:51.030 [2024-11-20 07:24:14.973029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.030 [2024-11-20 07:24:15.002386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.030 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.030 [2024-11-20 07:24:15.076878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.031 [2024-11-20 07:24:15.084945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.031 null0 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.031 null1 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=74398 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 74398 /tmp/host.sock 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74398 ']' 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.031 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.031 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.031 [2024-11-20 07:24:15.146746] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:51.031 [2024-11-20 07:24:15.146798] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74398 ] 00:26:51.289 [2024-11-20 07:24:15.293436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.289 [2024-11-20 07:24:15.330833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.289 [2024-11-20 07:24:15.361809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.289 07:24:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.289 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.290 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.290 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 [2024-11-20 07:24:15.649076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.548 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:51.548 
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.549 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:51.806 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:52.372 [2024-11-20 07:24:16.429429] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.372 [2024-11-20 07:24:16.429462] 
bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.373 [2024-11-20 07:24:16.429476] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.373 [2024-11-20 07:24:16.435462] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:52.373 [2024-11-20 07:24:16.489768] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:52.373 [2024-11-20 07:24:16.490586] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1113e60:1 started. 00:26:52.373 [2024-11-20 07:24:16.492142] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.373 [2024-11-20 07:24:16.492165] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.373 [2024-11-20 07:24:16.498193] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1113e60 was disconnected and freed. delete nvme_qpair. 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.631 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( 
max-- )) 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.890 [2024-11-20 07:24:16.931085] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1122000:1 started. 
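The `nvme0n1 nvme0n2` comparison in flight here is the section's recurring pattern: `waitforcondition` (autotest_common.sh@918-924) re-evaluates a condition string up to ten times, a second apart, giving the host's discovery service time to fetch the new log page before the bdev list is compared. Reconstructed from the traced lines, with a hypothetical `rpc_cmd` wrapper standing in for the test's RPC helper:

```bash
# waitforcondition and get_bdev_list as they appear in the trace
# (autotest_common.sh@918-924 and host/discovery.sh@55). rpc_cmd here is a
# hypothetical stand-in for the test's wrapper around SPDK's rpc.py.
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

get_bdev_list() {
    # One sorted, space-separated line of bdev names from the host app.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

waitforcondition() {
    local cond=$1 max=10
    while ((max--)); do
        eval "$cond" && return 0  # condition met
        sleep 1                   # e.g. discovery hasn't re-read the log page yet
    done
    return 1
}

# As used right after null1 is added to cnode0:
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
```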
00:26:52.890 [2024-11-20 07:24:16.938345] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1122000 was disconnected and freed. delete nvme_qpair. 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:52.890 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 [2024-11-20 07:24:16.994049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:52.891 [2024-11-20 07:24:16.994473] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:52.891 [2024-11-20 07:24:16.994504] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.891 07:24:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:52.891 [2024-11-20 07:24:17.000471] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:52.891 07:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.891 07:24:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 [2024-11-20 07:24:17.062804] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:52.891 [2024-11-20 07:24:17.062852] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.891 [2024-11-20 07:24:17.062860] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.891 [2024-11-20 07:24:17.062863] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:52.891 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:53.150 
07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.150 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 [2024-11-20 07:24:17.138595] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:53.151 [2024-11-20 07:24:17.138622] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:53.151 [2024-11-20 07:24:17.139869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.151 [2024-11-20 07:24:17.139897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.151 [2024-11-20 07:24:17.139905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.151 [2024-11-20 07:24:17.139910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.151 [2024-11-20 07:24:17.139917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.151 [2024-11-20 07:24:17.139923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.151 [2024-11-20 07:24:17.139930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.151 [2024-11-20 07:24:17.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.151 [2024-11-20 07:24:17.139941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0230 is same with the state(6) to be set 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:53.151 [2024-11-20 07:24:17.144601] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:53.151 [2024-11-20 07:24:17.144627] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:53.151 [2024-11-20 07:24:17.144667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0230 (9): Bad file descriptor 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
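The @918-@922 frames that repeat throughout this test come from the harness's waitforcondition helper in common/autotest_common.sh. A minimal sketch of that polling loop, reconstructed from the traced line numbers; the retry delay and the timeout return path are assumptions, since the xtrace above only shows iterations that succeed on the first try:

    waitforcondition() {
        local cond=$1                # @918: condition string, eval'ed verbatim
        local max=10                 # @919: bounded number of attempts
        while ((max--)); do          # @920: post-decrement, so at most 10 tries
            if eval "$cond"; then    # @921: e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
                return 0             # @922: condition met
            fi
            sleep 1                  # assumed retry delay; not visible in the trace
        done
        return 1                     # assumed timeout path; never exercised in this run
    }

Each call site above (get_subsystem_names, get_bdev_list, get_subsystem_paths, is_notification_count_eq) passes a shell expression as $cond, which is why the condition text appears quoted word-by-word in the eval frames.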
00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:53.151 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.152 07:24:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.152 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.410 07:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.342 [2024-11-20 07:24:18.419759] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:54.342 [2024-11-20 07:24:18.419784] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:54.342 [2024-11-20 07:24:18.419795] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:54.342 [2024-11-20 07:24:18.425782] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:54.342 [2024-11-20 07:24:18.483993] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:54.342 [2024-11-20 07:24:18.484536] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11205e0:1 started. 
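At discovery.sh@141 the test restarts discovery with -w (wait_for_attach), and the frames below expect a second, identical bdev_nvme_start_discovery on the same portal to be rejected with JSON-RPC error -17 ("File exists"). A standalone sketch of that duplicate-start check, assuming the same /tmp/host.sock RPC socket and 10.0.0.2:8009 discovery service that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    start_discovery() {
        "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery \
            -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
            -q nqn.2021-12.io.spdk:test -w
    }
    start_discovery                 # first start attaches the discovery ctrlr
    if ! start_discovery; then      # second start on the same portal must fail
        echo 'duplicate discovery start rejected with "File exists", as expected'
    fi

The NOT wrapper in the trace serves the same purpose: it inverts rpc_cmd's exit status so that the expected -17 error counts as a pass.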
00:26:54.342 [2024-11-20 07:24:18.486023] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:54.342 [2024-11-20 07:24:18.486057] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.342 [2024-11-20 07:24:18.488734] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11205e0 was disconnected and freed. delete nvme_qpair. 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.342 request: 00:26:54.342 { 00:26:54.342 "name": "nvme", 00:26:54.342 "trtype": "tcp", 00:26:54.342 "traddr": "10.0.0.2", 00:26:54.342 "adrfam": "ipv4", 00:26:54.342 "trsvcid": "8009", 00:26:54.342 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:54.342 "wait_for_attach": true, 00:26:54.342 "method": "bdev_nvme_start_discovery", 00:26:54.342 "req_id": 1 00:26:54.342 } 00:26:54.342 Got JSON-RPC error response 00:26:54.342 response: 00:26:54.342 { 00:26:54.342 "code": -17, 00:26:54.342 "message": "File exists" 00:26:54.342 } 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.342 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.343 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:54.343 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:54.343 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:54.343 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.600 request: 00:26:54.600 { 00:26:54.600 "name": "nvme_second", 00:26:54.600 "trtype": "tcp", 00:26:54.600 "traddr": "10.0.0.2", 00:26:54.600 "adrfam": "ipv4", 00:26:54.600 "trsvcid": "8009", 00:26:54.600 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:54.600 "wait_for_attach": true, 00:26:54.600 "method": 
"bdev_nvme_start_discovery", 00:26:54.600 "req_id": 1 00:26:54.600 } 00:26:54.600 Got JSON-RPC error response 00:26:54.600 response: 00:26:54.600 { 00:26:54.600 "code": -17, 00:26:54.600 "message": "File exists" 00:26:54.600 } 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:54.600 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.601 07:24:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.601 07:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.532 [2024-11-20 07:24:19.667007] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.532 [2024-11-20 07:24:19.667060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1113c80 with addr=10.0.0.2, port=8010 00:26:55.532 [2024-11-20 07:24:19.667074] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:55.532 [2024-11-20 07:24:19.667080] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.532 [2024-11-20 07:24:19.667085] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:56.901 [2024-11-20 07:24:20.667001] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.901 [2024-11-20 07:24:20.667047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1113c80 with addr=10.0.0.2, port=8010 00:26:56.901 [2024-11-20 07:24:20.667060] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:56.901 [2024-11-20 07:24:20.667066] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:56.901 [2024-11-20 07:24:20.667071] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:57.524 [2024-11-20 07:24:21.666918] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:57.524 request: 00:26:57.524 { 00:26:57.524 "name": "nvme_second", 00:26:57.524 "trtype": "tcp", 00:26:57.524 "traddr": "10.0.0.2", 00:26:57.524 "adrfam": "ipv4", 00:26:57.524 "trsvcid": "8010", 00:26:57.524 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:57.524 "wait_for_attach": false, 00:26:57.524 "attach_timeout_ms": 3000, 00:26:57.524 "method": "bdev_nvme_start_discovery", 00:26:57.524 "req_id": 1 00:26:57.524 } 00:26:57.524 Got JSON-RPC error response 00:26:57.524 response: 00:26:57.524 { 00:26:57.524 "code": -110, 00:26:57.524 "message": "Connection timed out" 00:26:57.524 } 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:57.524 07:24:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 74398 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:57.524 07:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:01.708 rmmod nvme_tcp 00:27:01.708 rmmod nvme_fabrics 00:27:01.708 rmmod nvme_keyring 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 74373 ']' 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 74373 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 74373 ']' 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 74373 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74373 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.708 killing process with pid 74373 00:27:01.708 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74373' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 74373 00:27:01.709 
07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 74373 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:01.709 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for 
dev in "${dev_map[@]}" 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:27:01.968 00:27:01.968 real 0m11.665s 00:27:01.968 user 0m18.472s 00:27:01.968 sys 0m1.460s 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.968 ************************************ 00:27:01.968 END TEST nvmf_host_discovery 00:27:01.968 ************************************ 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.968 ************************************ 00:27:01.968 START TEST nvmf_host_multipath_status 00:27:01.968 ************************************ 00:27:01.968 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:01.968 * Looking for test storage... 
00:27:01.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.968 --rc genhtml_branch_coverage=1 00:27:01.968 --rc genhtml_function_coverage=1 00:27:01.968 --rc genhtml_legend=1 00:27:01.968 --rc geninfo_all_blocks=1 00:27:01.968 --rc geninfo_unexecuted_blocks=1 00:27:01.968 00:27:01.968 ' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.968 --rc genhtml_branch_coverage=1 00:27:01.968 --rc genhtml_function_coverage=1 00:27:01.968 --rc genhtml_legend=1 00:27:01.968 --rc geninfo_all_blocks=1 00:27:01.968 --rc geninfo_unexecuted_blocks=1 00:27:01.968 00:27:01.968 ' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.968 --rc genhtml_branch_coverage=1 00:27:01.968 --rc genhtml_function_coverage=1 00:27:01.968 --rc genhtml_legend=1 00:27:01.968 --rc geninfo_all_blocks=1 00:27:01.968 --rc geninfo_unexecuted_blocks=1 00:27:01.968 00:27:01.968 ' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.968 --rc genhtml_branch_coverage=1 00:27:01.968 --rc genhtml_function_coverage=1 00:27:01.968 --rc genhtml_legend=1 00:27:01.968 --rc geninfo_all_blocks=1 00:27:01.968 --rc geninfo_unexecuted_blocks=1 00:27:01.968 00:27:01.968 ' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:01.968 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.968 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:01.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:01.969 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@223 -- # create_target_ns 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:01.969 
07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # 
ips=() 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:01.969 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target0 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0 up 
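
The trace above is nvmf/setup.sh building the first initiator/target interface pair: a bridge plus two veth pairs whose *_br peers get enslaved to nvmf_br a few steps further on, while the target-side device is later moved into the nvmf_ns_spdk namespace. A condensed sketch of the plumbing for pair 0, using the device names from the log (a summary of the traced commands, not the script itself):

ip netns add nvmf_ns_spdk                              # target side gets its own namespace
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT    # permit forwarding within the bridge
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set initiator0 up
ip link set initiator0_br up
ip link set target0 up
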
00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:01.970 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:02.229 10.0.0.1 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:02.229 10.0.0.2 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:02.229 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 
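
In the set_ip calls above, val_to_ip turns a 32-bit integer from the ip_pool into a dotted quad (167772161 is 0x0A000001, i.e. 10.0.0.1), which set_ip then assigns with ip addr add and mirrors into the device's ifalias so later helpers can read it back with cat. A minimal sketch of that conversion, assuming val_to_ip simply unpacks the four octets (the trace only shows the resulting printf, so this implementation is inferred):

# assumed implementation: unpack four octets from a 32-bit value
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8) & 255 ))  $((  val        & 255 ))
}
val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
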
00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # 
local dev=initiator1 in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:02.230 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772163 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:02.230 10.0.0.3 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772164 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:02.230 10.0.0.4 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:02.230 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:02.230 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:02.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:02.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:27:02.231 00:27:02.231 --- 10.0.0.1 ping statistics --- 00:27:02.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.231 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:02.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:02.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:27:02.231 00:27:02.231 --- 10.0.0.2 ping statistics --- 00:27:02.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.231 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:02.231 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:02.232 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:02.232 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:27:02.232 00:27:02.232 --- 10.0.0.3 ping statistics --- 00:27:02.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.232 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:02.232 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:02.232 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:27:02.232 00:27:02.232 --- 10.0.0.4 ping statistics --- 00:27:02.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.232 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # return 0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ 
-n '' ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:02.232 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:02.232 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:02.232 ' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:02.233 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=74923 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 74923 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 74923 ']' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.233 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:02.491 [2024-11-20 07:24:26.453475] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:27:02.491 [2024-11-20 07:24:26.453529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.491 [2024-11-20 07:24:26.594353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:02.491 [2024-11-20 07:24:26.629987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.491 [2024-11-20 07:24:26.630122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.491 [2024-11-20 07:24:26.630183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.491 [2024-11-20 07:24:26.630257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.491 [2024-11-20 07:24:26.630274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
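
nvmfappstart above launches the target inside the namespace (the traced command is ip netns exec nvmf_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waitforlisten blocks until the RPC socket answers. A rough manual equivalent, where the polling loop is an illustration rather than the actual waitforlisten implementation:

ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# illustrative poll: wait until the app's RPC socket accepts requests
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
    sleep 0.5
done
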
00:27:02.491 [2024-11-20 07:24:26.630926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.491 [2024-11-20 07:24:26.631174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.491 [2024-11-20 07:24:26.661010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:03.422 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.422 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:03.422 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:03.422 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.422 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:03.423 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.423 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=74923 00:27:03.423 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:03.423 [2024-11-20 07:24:27.528317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.423 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:03.680 Malloc0 00:27:03.680 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:03.938 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.195 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.195 [2024-11-20 07:24:28.373452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.195 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:04.491 [2024-11-20 07:24:28.577537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=74973 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:04.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
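
Condensing the rpc.py calls traced above: the target gets a TCP transport, a 64 MiB Malloc bdev, and an ANA-reporting subsystem (-r) capped at two namespaces (-m 2) listening on both 4420 and 4421, the two ports whose path states the multipath checks below flip and verify:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
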
00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 74973 /var/tmp/bdevperf.sock 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 74973 ']' 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.491 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:05.455 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.455 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:05.455 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:05.714 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:05.975 Nvme0n1 00:27:05.975 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:06.234 Nvme0n1 00:27:06.234 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:06.234 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:08.141 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:08.141 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:08.399 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:08.657 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:09.589 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:09.589 07:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:09.589 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.589 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.848 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.848 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:09.848 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.848 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.848 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.848 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:09.848 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.848 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.105 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.105 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.105 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.105 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.362 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.362 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.362 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.362 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.620 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.620 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.620 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.620 07:24:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.879 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.879 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:10.879 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.879 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:11.138 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:12.077 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:12.077 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:12.077 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.077 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.335 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.335 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:12.335 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.335 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.592 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.592 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.592 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.592 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.850 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.850 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.850 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.850 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.850 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.850 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.850 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.850 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.108 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.108 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.108 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.108 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.366 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.366 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:13.366 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.625 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:13.885 07:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:14.835 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:14.835 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:14.835 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.835 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.092 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.350 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.350 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.350 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.350 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.608 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.608 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.608 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.608 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.865 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.865 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.865 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.865 07:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.124 07:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.124 07:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:16.124 07:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.381 07:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:16.381 07:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- 
# sleep 1 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.798 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.077 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.077 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.077 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.077 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.077 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.077 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.077 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.077 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.335 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.335 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.335 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.335 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:18.592 07:24:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:18.592 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:18.850 07:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.108 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:20.041 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:20.041 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:20.041 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.041 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.298 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.298 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.298 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.298 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.557 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.557 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.557 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.557 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.815 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.815 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.815 07:24:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.815 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:20.815 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.815 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:20.815 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.815 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.073 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.073 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:21.073 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.073 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.331 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.331 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:21.331 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:21.589 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:21.846 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:22.778 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:22.778 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:22.778 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.778 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.036 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.294 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.294 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.294 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.294 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.551 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.551 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:23.551 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.551 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.809 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.809 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.809 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.809 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:24.150 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.150 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:24.150 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:24.150 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:24.448 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:24.706 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:25.638 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:25.638 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:25.638 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.638 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.895 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.895 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.895 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.895 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.152 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:26.153 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.153 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.411 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.411 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.411 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.411 07:24:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.678 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.678 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.678 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.678 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.939 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.939 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:26.939 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:27.197 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:27.197 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.569 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.827 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:29.085 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.085 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:29.085 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.085 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:29.342 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:29.600 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:29.858 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:30.793 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:30.793 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:30.793 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.793 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:31.052 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.052 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:31.052 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.052 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:31.310 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.310 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:31.310 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.310 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:31.569 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.569 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:31.569 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.569 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.828 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.828 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:31.828 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.828 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.828 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.828 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:31.828 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.828 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.086 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.086 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state 
non_optimized inaccessible 00:27:32.086 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:32.345 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:32.603 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:33.552 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:33.552 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.552 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.552 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.810 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.810 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:33.810 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.810 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.068 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.068 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:34.068 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:34.068 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.327 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.327 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:34.327 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.327 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:34.585 07:24:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:34.585 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.842 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:34.842 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 74973
00:27:34.842 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 74973 ']'
00:27:34.843 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 74973
00:27:34.843 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:34.843 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:34.843 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74973
00:27:34.843 killing process with pid 74973
07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74973'
07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 74973
07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 74973
00:27:34.843 {
00:27:34.843 "results": [
00:27:34.843 {
00:27:34.843 "job": "Nvme0n1",
00:27:34.843 "core_mask": "0x4",
00:27:34.843 "workload": "verify",
00:27:34.843 "status": "terminated",
00:27:34.843 "verify_range": {
00:27:34.843 "start": 0,
00:27:34.843 "length": 16384
00:27:34.843 },
00:27:34.843 "queue_depth": 128,
00:27:34.843 "io_size": 4096,
00:27:34.843 "runtime": 28.656577,
00:27:34.843 "iops": 12173.749851561126,
00:27:34.843 "mibps": 47.55371035766065,
00:27:34.843 "io_failed": 0,
00:27:34.843 "io_timeout": 0,
00:27:34.843 "avg_latency_us": 10495.043013613209,
00:27:34.843 "min_latency_us": 516.7261538461538,
00:27:34.843 "max_latency_us": 3019898.88
00:27:34.843 }
00:27:34.843 ],
00:27:34.843 "core_count": 1
00:27:34.843 }
00:27:35.104 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 74973
00:27:35.104 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:35.104 [2024-11-20 07:24:28.632590] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:27:35.105 [2024-11-20 07:24:28.632658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74973 ] 00:27:35.105 [2024-11-20 07:24:28.763965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.105 [2024-11-20 07:24:28.800358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.105 [2024-11-20 07:24:28.831585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:35.105 Running I/O for 90 seconds... 00:27:35.105 8083.00 IOPS, 31.57 MiB/s [2024-11-20T07:24:59.308Z] 8714.50 IOPS, 34.04 MiB/s [2024-11-20T07:24:59.308Z] 9095.00 IOPS, 35.53 MiB/s [2024-11-20T07:24:59.308Z] 9253.25 IOPS, 36.15 MiB/s [2024-11-20T07:24:59.308Z] 9591.80 IOPS, 37.47 MiB/s [2024-11-20T07:24:59.308Z] 10242.50 IOPS, 40.01 MiB/s [2024-11-20T07:24:59.308Z] 10711.86 IOPS, 41.84 MiB/s [2024-11-20T07:24:59.308Z] 11054.50 IOPS, 43.18 MiB/s [2024-11-20T07:24:59.308Z] 11299.11 IOPS, 44.14 MiB/s [2024-11-20T07:24:59.308Z] 11468.40 IOPS, 44.80 MiB/s [2024-11-20T07:24:59.308Z] 11595.27 IOPS, 45.29 MiB/s [2024-11-20T07:24:59.308Z] 11705.00 IOPS, 45.72 MiB/s [2024-11-20T07:24:59.308Z] [2024-11-20 07:24:42.973716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.973988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.973996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.974015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.974036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.974056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.974076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.105 [2024-11-20 07:24:42.974097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
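Note: the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above are the expected effect of flipping a listener's ANA state while verify I/O is in flight; with bdev_nvme_set_options -r -1 applied earlier in the run, the initiator retries indefinitely instead of failing I/O, which is why io_failed ends at 0 in the summary. Each check_status round in the log is six invocations of one probe; a reconstruction from the xtrace (the helper name and the expected-value convention come from multipath_status.sh):

    port_status() { # usage: port_status <trsvcid> <field> <expected>
        local port=$1 field=$2 expected=$3 status
        # Dump bdevperf's view of the I/O paths and pick one field for one listener.
        status=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_io_paths | jq -r \
            ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ $status == "$expected" ]]
    }
    # e.g. the first assertion of a check_status round:
    port_status 4420 current true
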
00:27:35.105 [2024-11-20 07:24:42.974117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 
nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.105 [2024-11-20 07:24:42.974472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.105 [2024-11-20 07:24:42.974479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
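
Every completion printed in this stretch carries the same status, "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), a path-related ANA status rather than a media error, and dnr:0 means the Do Not Retry bit is clear, so the host is permitted to retry the I/O (for example on another path). An illustrative decoder for the (SCT/SC) pair as printed here; the table only covers the codes visible in this log, not the full NVMe status space:

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints, e.g. (03/02).
    # Only the values that occur in this log are mapped; this is not a complete
    # NVMe status-code table.
    SCT_NAMES = {0x3: "Path Related Status"}
    PATH_SC_NAMES = {0x02: "Asymmetric Access Inaccessible"}

    def decode_status(sct: int, sc: int, dnr: int) -> str:
        kind = SCT_NAMES.get(sct, f"SCT {sct:#04x}")
        name = PATH_SC_NAMES.get(sc, f"SC {sc:#04x}") if sct == 0x3 else f"SC {sc:#04x}"
        retry = "host may retry" if dnr == 0 else "do not retry"
        return f"{kind}: {name} ({retry})"

    print(decode_status(0x03, 0x02, dnr=0))
    # Path Related Status: Asymmetric Access Inaccessible (host may retry)
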
00:27:35.106 [2024-11-20 07:24:42.974756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.974931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.974985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.974992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.975013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.975037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.975058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.975078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.106 [2024-11-20 07:24:42.975099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.106 [2024-11-20 07:24:42.975288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.106 [2024-11-20 07:24:42.975296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
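
Each nvme_io_qpair_print_command record above has a fixed shape (opcode, sqid, cid, nsid, lba, len, then an SGL descriptor), so the opcode mix and LBA ranges can be recovered with one regular expression. Note the two descriptor kinds: the WRITEs carry "SGL DATA BLOCK OFFSET" while the READs carry "SGL TRANSPORT DATA BLOCK TRANSPORT", consistent with 4 KiB write payloads being described relative to the command capsule and read data being returned by the fabric transport, as one would expect on NVMe-oF over TCP. A sketch of such a parser; parse_command is a made-up helper for illustration, not an SPDK API:

    import re

    # Pull the fields out of an nvme_io_qpair_print_command record as it
    # appears in this log.
    CMD_RE = re.compile(
        r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

    def parse_command(record: str):
        m = CMD_RE.search(record)
        if m is None:
            return None
        op, sqid, cid, nsid, lba, nlb = m.groups()
        return {"op": op, "sqid": int(sqid), "cid": int(cid),
                "nsid": int(nsid), "lba": int(lba), "len": int(nlb)}

    record = ("nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
              "WRITE sqid:1 cid:13 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000")
    print(parse_command(record))
    # {'op': 'WRITE', 'sqid': 1, 'cid': 13, 'nsid': 1, 'lba': 11360, 'len': 8}
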
00:27:35.107 [2024-11-20 07:24:42.975388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.975489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.975988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.107 [2024-11-20 07:24:42.975996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:27:35.107 [2024-11-20 07:24:42.976010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.976018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.976031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.976038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.976051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.976060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.976078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.107 [2024-11-20 07:24:42.976086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.107 [2024-11-20 07:24:42.976099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:42.976854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.976985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.976993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.977012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.977020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.977039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.977047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:42.977072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:42.977081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.108 11445.85 IOPS, 44.71 MiB/s [2024-11-20T07:24:59.311Z] 10628.29 IOPS, 41.52 MiB/s [2024-11-20T07:24:59.311Z] 9919.73 IOPS, 38.75 MiB/s [2024-11-20T07:24:59.311Z] 9595.81 IOPS, 37.48 MiB/s [2024-11-20T07:24:59.311Z] 9823.82 IOPS, 38.37 MiB/s [2024-11-20T07:24:59.311Z] 10026.50 IOPS, 39.17 MiB/s [2024-11-20T07:24:59.311Z] 10426.74 IOPS, 40.73 MiB/s [2024-11-20T07:24:59.311Z] 10816.20 IOPS, 42.25 MiB/s [2024-11-20T07:24:59.311Z] 11138.10 IOPS, 43.51 MiB/s [2024-11-20T07:24:59.311Z] 11242.73 IOPS, 43.92 MiB/s [2024-11-20T07:24:59.311Z] 11335.13 IOPS, 44.28 MiB/s [2024-11-20T07:24:59.311Z] 11500.79 IOPS, 44.92 MiB/s [2024-11-20T07:24:59.311Z] 11774.76 IOPS, 46.00 MiB/s [2024-11-20T07:24:59.311Z] 12027.38 IOPS, 46.98 MiB/s [2024-11-20T07:24:59.311Z] [2024-11-20 07:24:56.606415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:56.606464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:56.606505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.108 [2024-11-20 07:24:56.606645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.108 [2024-11-20 07:24:56.606658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.108 [2024-11-20 07:24:56.606665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.109 [2024-11-20 07:24:56.606703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.109 [2024-11-20 07:24:56.606723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.109 [2024-11-20 07:24:56.606742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.109 [2024-11-20 07:24:56.606886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.109 [2024-11-20 07:24:56.606898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.606905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.606917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.606925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.606937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.606944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 
07:24:56.606956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.606963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.606975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.606982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.606994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.607001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.607025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.607045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.607064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.110 [2024-11-20 07:24:56.607084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.607104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.607123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.110 [2024-11-20 07:24:56.607143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 
cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.110 [2024-11-20 07:24:56.607156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.111 [2024-11-20 07:24:56.607163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.111 [2024-11-20 07:24:56.607176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.111 [2024-11-20 07:24:56.607184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.111 [2024-11-20 07:24:56.607197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.111 [2024-11-20 07:24:56.607204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.111 [2024-11-20 07:24:56.607217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.111 [2024-11-20 07:24:56.607235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.111 [2024-11-20 07:24:56.607248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.111 [2024-11-20 07:24:56.607255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.111 [2024-11-20 07:24:56.607268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.112 [2024-11-20 07:24:56.607279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.112 [2024-11-20 07:24:56.607292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.112 [2024-11-20 07:24:56.607300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.112 [2024-11-20 07:24:56.607313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.112 [2024-11-20 07:24:56.607321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.112 [2024-11-20 07:24:56.607334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.112 [2024-11-20 07:24:56.607341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.112 [2024-11-20 07:24:56.607353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.112 [2024-11-20 07:24:56.607361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:35.112 [2024-11-20 07:24:56.607-.609] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~40 further READ/WRITE commands on sqid:1 (nsid:1, lba:51232-52152, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0069-000e p:0 m:0 dnr:0 (identical per-command notice pairs condensed)
00:27:35.116 12118.30 IOPS, 47.34 MiB/s [2024-11-20T07:24:59.319Z] 12153.50 IOPS, 47.47 MiB/s [2024-11-20T07:24:59.319Z] Received shutdown signal, test time was about 28.657230 seconds
00:27:35.116
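Editor's note: the run of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions condensed above is the multipath test behaving as designed: the I/O job keeps queueing commands while the test flips the active listener's ANA state, and everything on the now-inaccessible path is failed back and retried on the other path. In SPDK's multipath tests that flip is issued with the nvmf_subsystem_listener_set_ana_state RPC; a minimal sketch follows (address and port taken from this run's virtual network, flag spellings worth double-checking against rpc.py -h):

# Flip the first listener to inaccessible, then restore it (a sketch, not
# the verbatim test script; 10.0.0.2:4420 matches this run's target0).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
sleep 1   # give the host time to observe the ANA change and fail over
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n optimized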
00:27:35.116 Latency(us)
00:27:35.116 [2024-11-20T07:24:59.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:35.116 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:35.116 Verification LBA range: start 0x0 length 0x4000
00:27:35.116 Nvme0n1 : 28.66 12173.75 47.55 0.00 0.00 10495.04 516.73 3019898.88
00:27:35.116 [2024-11-20T07:24:59.319Z] ===================================================================================================================
00:27:35.116 [2024-11-20T07:24:59.319Z] Total : 12173.75 47.55 0.00 0.00 10495.04 516.73 3019898.88
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup
00:27:35.116 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20}
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:27:35.384 rmmod nvme_tcp
00:27:35.384 rmmod nvme_fabrics
00:27:35.384 rmmod nvme_keyring
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 74923 ']'
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 74923
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 74923 ']'
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 74923
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74923
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:35.384 killing process with pid 74923
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74923'
00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 74923
00:27:35.384 07:24:59
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 74923 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:35.384 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 
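Editor's note: the nvmf_fini trace running through here and just below reduces to a short sequence of ip/iptables commands. A condensed sketch, with device and namespace names as in this run (the real nvmf/setup.sh wraps each step in existence checks and xtrace toggling):

# remove_target_ns ran first (silenced via xtrace_disable_per_cmd): deleting
# the namespace also removes target0/target1, which is why the dev_map loop
# in the trace only hits "continue" for those two devices.
ip netns delete nvmf_ns_spdk 2>/dev/null
ip link delete nvmf_br 2>/dev/null            # main bridge
for dev in initiator0 initiator1; do
  ip link delete "$dev" 2>/dev/null           # deleting one veth end removes its *_br peer too
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip only the SPDK-tagged rules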
00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:27:35.645 ************************************ 00:27:35.645 END TEST nvmf_host_multipath_status 00:27:35.645 ************************************ 00:27:35.645 00:27:35.645 real 0m33.694s 00:27:35.645 user 1m48.311s 00:27:35.645 sys 0m8.233s 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.645 ************************************ 00:27:35.645 START TEST nvmf_discovery_remove_ifc 00:27:35.645 ************************************ 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.645 * Looking for test storage... 
00:27:35.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:35.645 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.646 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.908 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.909 --rc genhtml_branch_coverage=1 00:27:35.909 --rc genhtml_function_coverage=1 00:27:35.909 --rc genhtml_legend=1 00:27:35.909 --rc geninfo_all_blocks=1 00:27:35.909 --rc geninfo_unexecuted_blocks=1 00:27:35.909 00:27:35.909 ' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.909 --rc genhtml_branch_coverage=1 00:27:35.909 --rc genhtml_function_coverage=1 00:27:35.909 --rc genhtml_legend=1 00:27:35.909 --rc geninfo_all_blocks=1 00:27:35.909 --rc geninfo_unexecuted_blocks=1 00:27:35.909 00:27:35.909 ' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.909 --rc genhtml_branch_coverage=1 00:27:35.909 --rc genhtml_function_coverage=1 00:27:35.909 --rc genhtml_legend=1 00:27:35.909 --rc geninfo_all_blocks=1 00:27:35.909 --rc geninfo_unexecuted_blocks=1 00:27:35.909 00:27:35.909 ' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.909 --rc genhtml_branch_coverage=1 00:27:35.909 --rc genhtml_function_coverage=1 00:27:35.909 --rc genhtml_legend=1 00:27:35.909 --rc geninfo_all_blocks=1 00:27:35.909 --rc geninfo_unexecuted_blocks=1 00:27:35.909 00:27:35.909 ' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:35.909 07:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:35.909 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:35.909 07:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@223 -- # create_target_ns 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:35.909 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:35.910 07:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:27:35.910 07:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/setup.sh@152 -- # set_up target0_br 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:35.910 10.0.0.1 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:35.910 10.0.0.2 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:35.910 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:35.911 
07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:35.911 07:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:25:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772163 00:27:35.911 
07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:35.911 10.0.0.3 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772164 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:35.911 10.0.0.4 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:35.911 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:35.912 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
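The second initiator/target pair finishes setup at this point. Condensed from the commands traced above, each pass of setup_interface_pair in nvmf/setup.sh amounts to the sequence below (a sketch reconstructed from this run's trace, not the literal script body; the nvmf_br bridge and nvmf_ns_spdk namespace names are the values used in this run):

# create veth pairs; the *_br ends stay in the root namespace for bridging
ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set initiator1 up
ip link set initiator1_br up
ip link set target1 up
ip link set target1_br up

# move the target end into the SPDK network namespace
ip link set target1 netns nvmf_ns_spdk

# assign addresses and mirror them into ifalias (setup.sh reads ifalias back later)
ip addr add 10.0.0.3/24 dev initiator1
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
ip netns exec nvmf_ns_spdk ip link set target1 up

# bridge the *_br peers so initiator and target traffic can cross namespaces
ip link set initiator1_br master nvmf_br
ip link set target1_br master nvmf_br
ip link set initiator1_br up
ip link set target1_br up

# open the NVMe/TCP port for traffic arriving on the initiator interface
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'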
00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:36.174 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:36.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:27:36.175 00:27:36.175 --- 10.0.0.1 ping statistics --- 00:27:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.175 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:36.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:27:36.175 00:27:36.175 --- 10.0.0.2 ping statistics --- 00:27:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.175 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:36.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:36.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:27:36.175 00:27:36.175 --- 10.0.0.3 ping statistics --- 00:27:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.175 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:36.175 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:36.175 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:27:36.175 00:27:36.175 --- 10.0.0.4 ping statistics --- 00:27:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.175 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # return 0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:27:36.175 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:36.176 
07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:36.176 ' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 
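With both pairs answering pings, nvmf_legacy_env resolves the addresses the rest of the test consumes by reading back the ifalias files written during setup, as traced above. A condensed sketch of that derivation using this run's values (the $(...) assignments are a reconstruction of what the traced cat/ip-netns calls feed into):

# initiator-side addresses are readable from the root namespace
NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)   # 10.0.0.1
NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)  # 10.0.0.3

# target-side addresses live inside the SPDK namespace
NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)   # 10.0.0.2
NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias)  # 10.0.0.4

# transport options consumed later by target setup, plus the kernel initiator driver
NVMF_TRANSPORT_OPTS='-t tcp -o'
modprobe nvme-tcp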
00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=75759 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 75759 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 75759 ']' 00:27:36.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.176 07:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:36.176 [2024-11-20 07:25:00.308033] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:27:36.176 [2024-11-20 07:25:00.308089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.434 [2024-11-20 07:25:00.445514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.434 [2024-11-20 07:25:00.480423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.434 [2024-11-20 07:25:00.480465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.434 [2024-11-20 07:25:00.480472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.434 [2024-11-20 07:25:00.480477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.434 [2024-11-20 07:25:00.480481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:36.434 [2024-11-20 07:25:00.480742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.434 [2024-11-20 07:25:00.511508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.038 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.301 [2024-11-20 07:25:01.222610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.301 [2024-11-20 07:25:01.230690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:37.301 null0 00:27:37.301 [2024-11-20 07:25:01.262645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=75791 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 75791 /tmp/host.sock 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 75791 ']' 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.301 07:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.301 [2024-11-20 07:25:01.316599] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:27:37.301 [2024-11-20 07:25:01.316645] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75791 ] 00:27:37.301 [2024-11-20 07:25:01.451231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.301 [2024-11-20 07:25:01.483330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.235 [2024-11-20 07:25:02.224160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.235 07:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.175 [2024-11-20 07:25:03.265242] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:39.175 [2024-11-20 07:25:03.265268] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:39.175 [2024-11-20 07:25:03.265279] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:39.175 [2024-11-20 07:25:03.271269] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:39.175 [2024-11-20 07:25:03.325506] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:39.175 [2024-11-20 07:25:03.326127] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] 
Connecting qpair 0x24dbfc0:1 started. 00:27:39.175 [2024-11-20 07:25:03.327390] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:39.175 [2024-11-20 07:25:03.327431] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:39.175 [2024-11-20 07:25:03.327448] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:39.175 [2024-11-20 07:25:03.327460] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:39.175 [2024-11-20 07:25:03.327476] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.175 [2024-11-20 07:25:03.333903] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24dbfc0 was disconnected and freed. delete nvme_qpair. 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0 00:27:39.175 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set target0 down 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
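This is the fault the test injects: discovery_remove_ifc.sh strips the first target's address and downs its link inside the namespace, then polls the host-side bdev list until nvme0n1 disappears. A condensed sketch of the step (the ip commands are verbatim from the trace above; wait_for_bdev's loop is reconstructed from the repeated get_bdev_list/sleep traces that follow, and rpc_cmd is the harness RPC wrapper):

# sever target0 out from under the connected host stack
ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 down

# wait_for_bdev '': poll until the bdev list drains to the expected (empty) value
while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != '' ]]; do
    sleep 1
done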
00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:39.436 07:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:40.371 07:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:41.305 07:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.676 07:25:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.676 07:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.608 07:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.539 07:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.796 [2024-11-20 07:25:08.755938] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:44.796 [2024-11-20 07:25:08.755998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.796 [2024-11-20 07:25:08.756006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.796 [2024-11-20 07:25:08.756013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:44.796 [2024-11-20 07:25:08.756018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.796 [2024-11-20 07:25:08.756023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.796 [2024-11-20 07:25:08.756028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.796 [2024-11-20 07:25:08.756033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.796 [2024-11-20 07:25:08.756037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.796 [2024-11-20 07:25:08.756043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.796 [2024-11-20 07:25:08.756047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.797 [2024-11-20 07:25:08.756052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8240 is same with the state(6) to be set 00:27:44.797 [2024-11-20 07:25:08.765933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8240 (9): Bad file descriptor 00:27:44.797 [2024-11-20 07:25:08.775945] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:44.797 [2024-11-20 07:25:08.775958] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:44.797 [2024-11-20 07:25:08.775961] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:44.797 [2024-11-20 07:25:08.775964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:44.797 [2024-11-20 07:25:08.775985] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
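The disconnect/reconnect cadence above, and in the retries that follow, is governed by the options passed to bdev_nvme_start_discovery when the host attached earlier in this log. Roughly (flag semantics summarized from SPDK's bdev_nvme options, not stated in this log):

#   --reconnect-delay-sec 1       wait ~1s between reconnect attempts
#   --fast-io-fail-timeout-sec 1  start failing queued I/O after ~1s disconnected
#   --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2s
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach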
00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.731 [2024-11-20 07:25:09.829296] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:27:45.731 [2024-11-20 07:25:09.829422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b8240 with addr=10.0.0.2, port=4420 00:27:45.731 [2024-11-20 07:25:09.829460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8240 is same with the state(6) to be set 00:27:45.731 [2024-11-20 07:25:09.829518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8240 (9): Bad file descriptor 00:27:45.731 [2024-11-20 07:25:09.830667] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:45.731 [2024-11-20 07:25:09.830762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:45.731 [2024-11-20 07:25:09.830782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:45.731 [2024-11-20 07:25:09.830801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:45.731 [2024-11-20 07:25:09.830819] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:45.731 [2024-11-20 07:25:09.830831] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:45.731 [2024-11-20 07:25:09.830840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:45.731 [2024-11-20 07:25:09.830858] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:45.731 [2024-11-20 07:25:09.830869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:45.731 07:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.664 [2024-11-20 07:25:10.830949] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:46.664 [2024-11-20 07:25:10.830990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:46.664 [2024-11-20 07:25:10.831013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:46.664 [2024-11-20 07:25:10.831020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:46.664 [2024-11-20 07:25:10.831026] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:46.664 [2024-11-20 07:25:10.831032] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:46.664 [2024-11-20 07:25:10.831037] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:46.664 [2024-11-20 07:25:10.831041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:46.664 [2024-11-20 07:25:10.831063] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:46.664 [2024-11-20 07:25:10.831097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.664 [2024-11-20 07:25:10.831106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.664 [2024-11-20 07:25:10.831115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.664 [2024-11-20 07:25:10.831121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.664 [2024-11-20 07:25:10.831127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.664 [2024-11-20 07:25:10.831133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.664 [2024-11-20 07:25:10.831139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.664 [2024-11-20 07:25:10.831145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.664 [2024-11-20 07:25:10.831152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.664 [2024-11-20 07:25:10.831157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.664 [2024-11-20 07:25:10.831163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:46.664 [2024-11-20 07:25:10.831777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2443a20 (9): Bad file descriptor 00:27:46.664 [2024-11-20 07:25:10.832787] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:46.664 [2024-11-20 07:25:10.832802] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:46.922 07:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.855 07:25:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:47.855 07:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.789 [2024-11-20 07:25:12.838204] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:48.789 [2024-11-20 07:25:12.838236] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:48.789 [2024-11-20 07:25:12.838246] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:48.789 [2024-11-20 07:25:12.844233] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:48.789 [2024-11-20 07:25:12.898453] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:48.789 [2024-11-20 07:25:12.898961] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2494f00:1 started. 00:27:48.789 [2024-11-20 07:25:12.899903] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:48.789 [2024-11-20 07:25:12.899936] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:48.789 [2024-11-20 07:25:12.899950] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:48.789 [2024-11-20 07:25:12.899961] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:48.789 [2024-11-20 07:25:12.899966] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:48.789 [2024-11-20 07:25:12.907020] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2494f00 was disconnected and freed. delete nvme_qpair. 
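Once target0's address is re-added and the link brought back up, the trace loops in wait_for_bdev until discovery re-attaches and nvme1n1 reappears in the bdev list. A hedged reconstruction of that polling loop (helper name taken from the traced script; the real script compares the whole list, this sketch substring-matches):

    # Poll once per second until the expected bdev shows up.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme1n1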
00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.789 07:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 75791 ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.048 killing process with pid 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75791' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 75791 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:49.048 rmmod nvme_tcp 00:27:49.048 rmmod nvme_fabrics 00:27:49.048 rmmod nvme_keyring 00:27:49.048 07:25:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 75759 ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 75759 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 75759 ']' 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 75759 00:27:49.048 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75759 00:27:49.326 killing process with pid 75759 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75759' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 75759 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 75759 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 
-- # for dev in "${dev_map[@]}" 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:49.326 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:27:49.607 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:27:49.607 00:27:49.608 real 0m13.836s 00:27:49.608 user 0m23.644s 00:27:49.608 sys 0m2.088s 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.608 ************************************ 00:27:49.608 END TEST nvmf_discovery_remove_ifc 00:27:49.608 ************************************ 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.608 ************************************ 00:27:49.608 START TEST nvmf_identify_kernel_target 00:27:49.608 ************************************ 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:49.608 * Looking for test storage... 00:27:49.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.608 --rc genhtml_branch_coverage=1 00:27:49.608 --rc genhtml_function_coverage=1 00:27:49.608 --rc genhtml_legend=1 00:27:49.608 --rc geninfo_all_blocks=1 00:27:49.608 --rc geninfo_unexecuted_blocks=1 00:27:49.608 00:27:49.608 ' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.608 --rc genhtml_branch_coverage=1 00:27:49.608 --rc genhtml_function_coverage=1 00:27:49.608 --rc genhtml_legend=1 00:27:49.608 --rc geninfo_all_blocks=1 00:27:49.608 --rc geninfo_unexecuted_blocks=1 00:27:49.608 00:27:49.608 ' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.608 --rc genhtml_branch_coverage=1 00:27:49.608 --rc genhtml_function_coverage=1 00:27:49.608 --rc genhtml_legend=1 00:27:49.608 --rc geninfo_all_blocks=1 00:27:49.608 --rc geninfo_unexecuted_blocks=1 00:27:49.608 00:27:49.608 ' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.608 --rc genhtml_branch_coverage=1 00:27:49.608 --rc genhtml_function_coverage=1 00:27:49.608 --rc genhtml_legend=1 00:27:49.608 --rc geninfo_all_blocks=1 00:27:49.608 --rc geninfo_unexecuted_blocks=1 00:27:49.608 00:27:49.608 ' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
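The lt/cmp_versions trace above gates the lcov-specific LCOV_OPTS on the installed lcov being older than 2.x. A simplified, self-contained stand-in for that comparison, assuming numeric dot/dash-separated fields (the real scripts/common.sh handles more operators through the same field-by-field loop):

    # Return success iff $1 sorts strictly below $2, field by field.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not strictly less
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"  # matches the traced result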
00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:49.608 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:49.609 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:49.609 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@223 -- # create_target_ns 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:49.609 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:49.609 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:49.609 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:49.609 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:49.610 10.0.0.1 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:49.610 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:49.610 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:49.870 10.0.0.2 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.870 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:49.870 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local 
dev=initiator1_br in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772163 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:49.871 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:49.871 10.0.0.3 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772164 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:49.871 10.0.0.4 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:49.871 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:49.872 07:25:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:49.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
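Condensed, the plumbing traced above for the second initiator/target pair comes down to a dozen ip(8) calls. A minimal sketch using the device, bridge, and namespace names from the trace (ifalias bookkeeping and error handling omitted):

  ip link add initiator1 type veth peer name initiator1_br
  ip link add target1 type veth peer name target1_br
  ip link set target1 netns nvmf_ns_spdk                   # target side lives in the test namespace
  ip addr add 10.0.0.3/24 dev initiator1                   # host-side address
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
  ip link set initiator1_br master nvmf_br                 # both *_br peers join the shared bridge
  ip link set target1_br master nvmf_br
  for dev in initiator1 initiator1_br target1_br; do ip link set "$dev" up; done
  ip netns exec nvmf_ns_spdk ip link set target1 up
  # open the NVMe/TCP port on the initiator-facing interface, tagged for later cleanup
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'

The ping_ips pass that starts here is the sanity check that this wiring actually forwards traffic.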
00:27:49.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:27:49.872 00:27:49.872 --- 10.0.0.1 ping statistics --- 00:27:49.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.872 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:49.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
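The 167772163 -> 10.0.0.3 step seen earlier is plain base-256 arithmetic: the address pool is handed out as 32-bit integers (10.0.0.0 is 10*2^24 = 167772160) and val_to_ip splits a value into octets. The trace only shows the final printf, so the shift-and-mask below is an assumed but equivalent decomposition:

  # assumed body; the trace shows only `printf '%u.%u.%u.%u\n' 10 0 0 3`
  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
          $(( (val >> 8) & 255 ))  $(( val & 255 ))
  }
  val_to_ip 167772163    # -> 10.0.0.3; the loop's `ip_pool += 2` yielded .1/.2 for pair 0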
00:27:49.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:27:49.872 00:27:49.872 --- 10.0.0.2 ping statistics --- 00:27:49.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.872 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:49.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
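Note that the ping matrix crosses the namespace boundary in both directions: initiator addresses are probed from inside nvmf_ns_spdk while target addresses are probed from the host, so one pass proves nvmf_br forwards both ways. For the two pairs here that amounts to:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator0 (done above)
  ping -c 1 10.0.0.2                              # host -> target0 (done above)
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1 (underway here)
  ping -c 1 10.0.0.4                              # host -> target1 (next)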
00:27:49.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:27:49.872 00:27:49.872 --- 10.0.0.3 ping statistics --- 00:27:49.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.872 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:49.872 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:49.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
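Also worth noticing: get_ip_address never parses `ip addr` output. set_ip stashed every address in the interface's ifalias sysfs node, and the repeated `cat /sys/class/net/*/ifalias` calls threaded through the trace just read it back, inside or outside the namespace:

  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias                           # write, host side
  echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias   # write, namespaced side
  ip=$(cat /sys/class/net/initiator1/ifalias)                                     # later lookup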
00:27:49.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:27:49.873 00:27:49.873 --- 10.0.0.4 ping statistics --- 00:27:49.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.873 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # return 0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:49.873 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
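With the ifalias lookups done, the legacy environment the rest of the suite consumes resolves to NVMF_FIRST_INITIATOR_IP=10.0.0.1 (initiator0), NVMF_SECOND_INITIATOR_IP=10.0.0.3 (initiator1), NVMF_FIRST_TARGET_IP=10.0.0.2 (target0, inside nvmf_ns_spdk), and NVMF_SECOND_TARGET_IP=10.0.0.4 (target1, inside nvmf_ns_spdk); the two target values are derived in the entries just below. A hypothetical consumer of these variables (the actual tests source them via nvmf/common.sh):

  nvme discover -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420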
00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:49.873 ' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:49.873 07:25:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:49.873 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:49.874 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:27:50.132 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:50.132 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:50.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:50.391 Waiting for block devices as requested 00:27:50.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:50.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:50.391 No valid GPT data, bailing 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:50.391 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 
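The recurring "No valid GPT data, bailing" lines here are the point of the loop: configure_kernel_target walks /sys/block/nvme*, skips zoned devices, and treats an empty blkid PTTYPE (cross-checked with scripts/spdk-gpt.py) as "free to export", keeping the last hit, /dev/nvme1n1 in this run. A sketch of the screen plus the configfs export it feeds; the trace shows only the echoed values, so which nvmet attribute file each one lands in is an assumption based on the standard /sys/kernel/config/nvmet layout:

  # pick an unused, non-zoned NVMe block device
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue  # skip zoned
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue                    # skip partitioned
      nvme=/dev/$dev
  done

  # export it as a kernel NVMe-oF/TCP target (values from the trace; the model
  # string "SPDK-nqn.2016-06.io.spdk:testnqn" written in the trace is omitted here)
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1         > "$subsys/attr_allow_any_host"
  echo "$nvme"   > "$subsys/namespaces/1/device_path"
  echo 1         > "$subsys/namespaces/1/enable"
  echo 10.0.0.1  > "$port/addr_traddr"
  echo tcp       > "$port/addr_trtype"
  echo 4420      > "$port/addr_trsvcid"
  echo ipv4      > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The `nvme discover` run that follows should then report exactly two records, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, which is what the log prints.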
00:27:50.650 No valid GPT data, bailing 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:50.650 No valid GPT data, bailing 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:50.650 No valid GPT data, bailing 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:50.650 07:25:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -a 10.0.0.1 -t tcp -s 4420 00:27:50.650 00:27:50.650 Discovery Log Number of Records 2, Generation counter 2 00:27:50.650 =====Discovery Log Entry 0====== 00:27:50.650 trtype: tcp 00:27:50.650 adrfam: ipv4 00:27:50.650 subtype: current discovery subsystem 00:27:50.650 treq: not specified, sq flow control disable supported 00:27:50.650 portid: 1 00:27:50.650 trsvcid: 4420 00:27:50.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:50.650 traddr: 10.0.0.1 00:27:50.650 eflags: none 00:27:50.650 sectype: none 00:27:50.650 =====Discovery Log Entry 1====== 00:27:50.650 trtype: tcp 00:27:50.650 adrfam: ipv4 00:27:50.650 subtype: nvme subsystem 00:27:50.650 treq: not specified, sq flow control disable supported 00:27:50.650 portid: 1 00:27:50.650 trsvcid: 4420 00:27:50.650 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:50.650 traddr: 10.0.0.1 00:27:50.650 eflags: none 00:27:50.650 sectype: none 00:27:50.650 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:50.650 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:50.910 ===================================================== 00:27:50.910 NVMe over Fabrics controller at 10.0.0.1:4420: 
nqn.2014-08.org.nvmexpress.discovery 00:27:50.910 ===================================================== 00:27:50.910 Controller Capabilities/Features 00:27:50.910 ================================ 00:27:50.910 Vendor ID: 0000 00:27:50.910 Subsystem Vendor ID: 0000 00:27:50.910 Serial Number: c01e50c781ff6093d29f 00:27:50.910 Model Number: Linux 00:27:50.910 Firmware Version: 6.8.9-20 00:27:50.910 Recommended Arb Burst: 0 00:27:50.910 IEEE OUI Identifier: 00 00 00 00:27:50.910 Multi-path I/O 00:27:50.910 May have multiple subsystem ports: No 00:27:50.910 May have multiple controllers: No 00:27:50.910 Associated with SR-IOV VF: No 00:27:50.910 Max Data Transfer Size: Unlimited 00:27:50.910 Max Number of Namespaces: 0 00:27:50.910 Max Number of I/O Queues: 1024 00:27:50.910 NVMe Specification Version (VS): 1.3 00:27:50.910 NVMe Specification Version (Identify): 1.3 00:27:50.910 Maximum Queue Entries: 1024 00:27:50.910 Contiguous Queues Required: No 00:27:50.910 Arbitration Mechanisms Supported 00:27:50.910 Weighted Round Robin: Not Supported 00:27:50.910 Vendor Specific: Not Supported 00:27:50.910 Reset Timeout: 7500 ms 00:27:50.910 Doorbell Stride: 4 bytes 00:27:50.910 NVM Subsystem Reset: Not Supported 00:27:50.910 Command Sets Supported 00:27:50.910 NVM Command Set: Supported 00:27:50.910 Boot Partition: Not Supported 00:27:50.910 Memory Page Size Minimum: 4096 bytes 00:27:50.910 Memory Page Size Maximum: 4096 bytes 00:27:50.910 Persistent Memory Region: Not Supported 00:27:50.910 Optional Asynchronous Events Supported 00:27:50.910 Namespace Attribute Notices: Not Supported 00:27:50.910 Firmware Activation Notices: Not Supported 00:27:50.910 ANA Change Notices: Not Supported 00:27:50.910 PLE Aggregate Log Change Notices: Not Supported 00:27:50.910 LBA Status Info Alert Notices: Not Supported 00:27:50.910 EGE Aggregate Log Change Notices: Not Supported 00:27:50.910 Normal NVM Subsystem Shutdown event: Not Supported 00:27:50.910 Zone Descriptor Change Notices: Not Supported 00:27:50.910 Discovery Log Change Notices: Supported 00:27:50.910 Controller Attributes 00:27:50.910 128-bit Host Identifier: Not Supported 00:27:50.910 Non-Operational Permissive Mode: Not Supported 00:27:50.910 NVM Sets: Not Supported 00:27:50.910 Read Recovery Levels: Not Supported 00:27:50.910 Endurance Groups: Not Supported 00:27:50.910 Predictable Latency Mode: Not Supported 00:27:50.910 Traffic Based Keep ALive: Not Supported 00:27:50.910 Namespace Granularity: Not Supported 00:27:50.911 SQ Associations: Not Supported 00:27:50.911 UUID List: Not Supported 00:27:50.911 Multi-Domain Subsystem: Not Supported 00:27:50.911 Fixed Capacity Management: Not Supported 00:27:50.911 Variable Capacity Management: Not Supported 00:27:50.911 Delete Endurance Group: Not Supported 00:27:50.911 Delete NVM Set: Not Supported 00:27:50.911 Extended LBA Formats Supported: Not Supported 00:27:50.911 Flexible Data Placement Supported: Not Supported 00:27:50.911 00:27:50.911 Controller Memory Buffer Support 00:27:50.911 ================================ 00:27:50.911 Supported: No 00:27:50.911 00:27:50.911 Persistent Memory Region Support 00:27:50.911 ================================ 00:27:50.911 Supported: No 00:27:50.911 00:27:50.911 Admin Command Set Attributes 00:27:50.911 ============================ 00:27:50.911 Security Send/Receive: Not Supported 00:27:50.911 Format NVM: Not Supported 00:27:50.911 Firmware Activate/Download: Not Supported 00:27:50.911 Namespace Management: Not Supported 00:27:50.911 Device Self-Test: Not Supported 
00:27:50.911 Directives: Not Supported 00:27:50.911 NVMe-MI: Not Supported 00:27:50.911 Virtualization Management: Not Supported 00:27:50.911 Doorbell Buffer Config: Not Supported 00:27:50.911 Get LBA Status Capability: Not Supported 00:27:50.911 Command & Feature Lockdown Capability: Not Supported 00:27:50.911 Abort Command Limit: 1 00:27:50.911 Async Event Request Limit: 1 00:27:50.911 Number of Firmware Slots: N/A 00:27:50.911 Firmware Slot 1 Read-Only: N/A 00:27:50.911 Firmware Activation Without Reset: N/A 00:27:50.911 Multiple Update Detection Support: N/A 00:27:50.911 Firmware Update Granularity: No Information Provided 00:27:50.911 Per-Namespace SMART Log: No 00:27:50.911 Asymmetric Namespace Access Log Page: Not Supported 00:27:50.911 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:50.911 Command Effects Log Page: Not Supported 00:27:50.911 Get Log Page Extended Data: Supported 00:27:50.911 Telemetry Log Pages: Not Supported 00:27:50.911 Persistent Event Log Pages: Not Supported 00:27:50.911 Supported Log Pages Log Page: May Support 00:27:50.911 Commands Supported & Effects Log Page: Not Supported 00:27:50.911 Feature Identifiers & Effects Log Page:May Support 00:27:50.911 NVMe-MI Commands & Effects Log Page: May Support 00:27:50.911 Data Area 4 for Telemetry Log: Not Supported 00:27:50.911 Error Log Page Entries Supported: 1 00:27:50.911 Keep Alive: Not Supported 00:27:50.911 00:27:50.911 NVM Command Set Attributes 00:27:50.911 ========================== 00:27:50.911 Submission Queue Entry Size 00:27:50.911 Max: 1 00:27:50.911 Min: 1 00:27:50.911 Completion Queue Entry Size 00:27:50.911 Max: 1 00:27:50.911 Min: 1 00:27:50.911 Number of Namespaces: 0 00:27:50.911 Compare Command: Not Supported 00:27:50.911 Write Uncorrectable Command: Not Supported 00:27:50.911 Dataset Management Command: Not Supported 00:27:50.911 Write Zeroes Command: Not Supported 00:27:50.911 Set Features Save Field: Not Supported 00:27:50.911 Reservations: Not Supported 00:27:50.911 Timestamp: Not Supported 00:27:50.911 Copy: Not Supported 00:27:50.911 Volatile Write Cache: Not Present 00:27:50.911 Atomic Write Unit (Normal): 1 00:27:50.911 Atomic Write Unit (PFail): 1 00:27:50.911 Atomic Compare & Write Unit: 1 00:27:50.911 Fused Compare & Write: Not Supported 00:27:50.911 Scatter-Gather List 00:27:50.911 SGL Command Set: Supported 00:27:50.911 SGL Keyed: Not Supported 00:27:50.911 SGL Bit Bucket Descriptor: Not Supported 00:27:50.911 SGL Metadata Pointer: Not Supported 00:27:50.911 Oversized SGL: Not Supported 00:27:50.911 SGL Metadata Address: Not Supported 00:27:50.911 SGL Offset: Supported 00:27:50.911 Transport SGL Data Block: Not Supported 00:27:50.911 Replay Protected Memory Block: Not Supported 00:27:50.911 00:27:50.911 Firmware Slot Information 00:27:50.911 ========================= 00:27:50.911 Active slot: 0 00:27:50.911 00:27:50.911 00:27:50.911 Error Log 00:27:50.911 ========= 00:27:50.911 00:27:50.911 Active Namespaces 00:27:50.911 ================= 00:27:50.911 Discovery Log Page 00:27:50.911 ================== 00:27:50.911 Generation Counter: 2 00:27:50.911 Number of Records: 2 00:27:50.911 Record Format: 0 00:27:50.911 00:27:50.911 Discovery Log Entry 0 00:27:50.911 ---------------------- 00:27:50.911 Transport Type: 3 (TCP) 00:27:50.911 Address Family: 1 (IPv4) 00:27:50.911 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:50.911 Entry Flags: 00:27:50.911 Duplicate Returned Information: 0 00:27:50.911 Explicit Persistent Connection Support for Discovery: 0 00:27:50.911 
Transport Requirements: 00:27:50.911 Secure Channel: Not Specified 00:27:50.911 Port ID: 1 (0x0001) 00:27:50.911 Controller ID: 65535 (0xffff) 00:27:50.911 Admin Max SQ Size: 32 00:27:50.911 Transport Service Identifier: 4420 00:27:50.911 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:50.911 Transport Address: 10.0.0.1 00:27:50.911 Discovery Log Entry 1 00:27:50.911 ---------------------- 00:27:50.911 Transport Type: 3 (TCP) 00:27:50.911 Address Family: 1 (IPv4) 00:27:50.911 Subsystem Type: 2 (NVM Subsystem) 00:27:50.911 Entry Flags: 00:27:50.911 Duplicate Returned Information: 0 00:27:50.911 Explicit Persistent Connection Support for Discovery: 0 00:27:50.911 Transport Requirements: 00:27:50.911 Secure Channel: Not Specified 00:27:50.911 Port ID: 1 (0x0001) 00:27:50.911 Controller ID: 65535 (0xffff) 00:27:50.911 Admin Max SQ Size: 32 00:27:50.911 Transport Service Identifier: 4420 00:27:50.911 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:50.911 Transport Address: 10.0.0.1 00:27:50.911 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:50.911 get_feature(0x01) failed 00:27:50.911 get_feature(0x02) failed 00:27:50.911 get_feature(0x04) failed 00:27:50.911 ===================================================== 00:27:50.911 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:50.911 ===================================================== 00:27:50.911 Controller Capabilities/Features 00:27:50.911 ================================ 00:27:50.911 Vendor ID: 0000 00:27:50.911 Subsystem Vendor ID: 0000 00:27:50.911 Serial Number: bb691ec3a0317da7e07b 00:27:50.911 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:50.911 Firmware Version: 6.8.9-20 00:27:50.911 Recommended Arb Burst: 6 00:27:50.911 IEEE OUI Identifier: 00 00 00 00:27:50.911 Multi-path I/O 00:27:50.911 May have multiple subsystem ports: Yes 00:27:50.911 May have multiple controllers: Yes 00:27:50.911 Associated with SR-IOV VF: No 00:27:50.911 Max Data Transfer Size: Unlimited 00:27:50.911 Max Number of Namespaces: 1024 00:27:50.911 Max Number of I/O Queues: 128 00:27:50.911 NVMe Specification Version (VS): 1.3 00:27:50.911 NVMe Specification Version (Identify): 1.3 00:27:50.911 Maximum Queue Entries: 1024 00:27:50.911 Contiguous Queues Required: No 00:27:50.911 Arbitration Mechanisms Supported 00:27:50.911 Weighted Round Robin: Not Supported 00:27:50.911 Vendor Specific: Not Supported 00:27:50.911 Reset Timeout: 7500 ms 00:27:50.911 Doorbell Stride: 4 bytes 00:27:50.911 NVM Subsystem Reset: Not Supported 00:27:50.911 Command Sets Supported 00:27:50.911 NVM Command Set: Supported 00:27:50.911 Boot Partition: Not Supported 00:27:50.911 Memory Page Size Minimum: 4096 bytes 00:27:50.911 Memory Page Size Maximum: 4096 bytes 00:27:50.911 Persistent Memory Region: Not Supported 00:27:50.911 Optional Asynchronous Events Supported 00:27:50.911 Namespace Attribute Notices: Supported 00:27:50.911 Firmware Activation Notices: Not Supported 00:27:50.911 ANA Change Notices: Supported 00:27:50.911 PLE Aggregate Log Change Notices: Not Supported 00:27:50.911 LBA Status Info Alert Notices: Not Supported 00:27:50.911 EGE Aggregate Log Change Notices: Not Supported 00:27:50.911 Normal NVM Subsystem Shutdown event: Not Supported 00:27:50.911 Zone Descriptor Change Notices: Not Supported 
00:27:50.911 Discovery Log Change Notices: Not Supported 00:27:50.911 Controller Attributes 00:27:50.911 128-bit Host Identifier: Supported 00:27:50.911 Non-Operational Permissive Mode: Not Supported 00:27:50.911 NVM Sets: Not Supported 00:27:50.911 Read Recovery Levels: Not Supported 00:27:50.911 Endurance Groups: Not Supported 00:27:50.911 Predictable Latency Mode: Not Supported 00:27:50.911 Traffic Based Keep ALive: Supported 00:27:50.911 Namespace Granularity: Not Supported 00:27:50.911 SQ Associations: Not Supported 00:27:50.911 UUID List: Not Supported 00:27:50.911 Multi-Domain Subsystem: Not Supported 00:27:50.912 Fixed Capacity Management: Not Supported 00:27:50.912 Variable Capacity Management: Not Supported 00:27:50.912 Delete Endurance Group: Not Supported 00:27:50.912 Delete NVM Set: Not Supported 00:27:50.912 Extended LBA Formats Supported: Not Supported 00:27:50.912 Flexible Data Placement Supported: Not Supported 00:27:50.912 00:27:50.912 Controller Memory Buffer Support 00:27:50.912 ================================ 00:27:50.912 Supported: No 00:27:50.912 00:27:50.912 Persistent Memory Region Support 00:27:50.912 ================================ 00:27:50.912 Supported: No 00:27:50.912 00:27:50.912 Admin Command Set Attributes 00:27:50.912 ============================ 00:27:50.912 Security Send/Receive: Not Supported 00:27:50.912 Format NVM: Not Supported 00:27:50.912 Firmware Activate/Download: Not Supported 00:27:50.912 Namespace Management: Not Supported 00:27:50.912 Device Self-Test: Not Supported 00:27:50.912 Directives: Not Supported 00:27:50.912 NVMe-MI: Not Supported 00:27:50.912 Virtualization Management: Not Supported 00:27:50.912 Doorbell Buffer Config: Not Supported 00:27:50.912 Get LBA Status Capability: Not Supported 00:27:50.912 Command & Feature Lockdown Capability: Not Supported 00:27:50.912 Abort Command Limit: 4 00:27:50.912 Async Event Request Limit: 4 00:27:50.912 Number of Firmware Slots: N/A 00:27:50.912 Firmware Slot 1 Read-Only: N/A 00:27:50.912 Firmware Activation Without Reset: N/A 00:27:50.912 Multiple Update Detection Support: N/A 00:27:50.912 Firmware Update Granularity: No Information Provided 00:27:50.912 Per-Namespace SMART Log: Yes 00:27:50.912 Asymmetric Namespace Access Log Page: Supported 00:27:50.912 ANA Transition Time : 10 sec 00:27:50.912 00:27:50.912 Asymmetric Namespace Access Capabilities 00:27:50.912 ANA Optimized State : Supported 00:27:50.912 ANA Non-Optimized State : Supported 00:27:50.912 ANA Inaccessible State : Supported 00:27:50.912 ANA Persistent Loss State : Supported 00:27:50.912 ANA Change State : Supported 00:27:50.912 ANAGRPID is not changed : No 00:27:50.912 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:50.912 00:27:50.912 ANA Group Identifier Maximum : 128 00:27:50.912 Number of ANA Group Identifiers : 128 00:27:50.912 Max Number of Allowed Namespaces : 1024 00:27:50.912 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:50.912 Command Effects Log Page: Supported 00:27:50.912 Get Log Page Extended Data: Supported 00:27:50.912 Telemetry Log Pages: Not Supported 00:27:50.912 Persistent Event Log Pages: Not Supported 00:27:50.912 Supported Log Pages Log Page: May Support 00:27:50.912 Commands Supported & Effects Log Page: Not Supported 00:27:50.912 Feature Identifiers & Effects Log Page:May Support 00:27:50.912 NVMe-MI Commands & Effects Log Page: May Support 00:27:50.912 Data Area 4 for Telemetry Log: Not Supported 00:27:50.912 Error Log Page Entries Supported: 128 00:27:50.912 Keep Alive: Supported 
00:27:50.912 Keep Alive Granularity: 1000 ms 00:27:50.912 00:27:50.912 NVM Command Set Attributes 00:27:50.912 ========================== 00:27:50.912 Submission Queue Entry Size 00:27:50.912 Max: 64 00:27:50.912 Min: 64 00:27:50.912 Completion Queue Entry Size 00:27:50.912 Max: 16 00:27:50.912 Min: 16 00:27:50.912 Number of Namespaces: 1024 00:27:50.912 Compare Command: Not Supported 00:27:50.912 Write Uncorrectable Command: Not Supported 00:27:50.912 Dataset Management Command: Supported 00:27:50.912 Write Zeroes Command: Supported 00:27:50.912 Set Features Save Field: Not Supported 00:27:50.912 Reservations: Not Supported 00:27:50.912 Timestamp: Not Supported 00:27:50.912 Copy: Not Supported 00:27:50.912 Volatile Write Cache: Present 00:27:50.912 Atomic Write Unit (Normal): 1 00:27:50.912 Atomic Write Unit (PFail): 1 00:27:50.912 Atomic Compare & Write Unit: 1 00:27:50.912 Fused Compare & Write: Not Supported 00:27:50.912 Scatter-Gather List 00:27:50.912 SGL Command Set: Supported 00:27:50.912 SGL Keyed: Not Supported 00:27:50.912 SGL Bit Bucket Descriptor: Not Supported 00:27:50.912 SGL Metadata Pointer: Not Supported 00:27:50.912 Oversized SGL: Not Supported 00:27:50.912 SGL Metadata Address: Not Supported 00:27:50.912 SGL Offset: Supported 00:27:50.912 Transport SGL Data Block: Not Supported 00:27:50.912 Replay Protected Memory Block: Not Supported 00:27:50.912 00:27:50.912 Firmware Slot Information 00:27:50.912 ========================= 00:27:50.912 Active slot: 0 00:27:50.912 00:27:50.912 Asymmetric Namespace Access 00:27:50.912 =========================== 00:27:50.912 Change Count : 0 00:27:50.912 Number of ANA Group Descriptors : 1 00:27:50.912 ANA Group Descriptor : 0 00:27:50.912 ANA Group ID : 1 00:27:50.912 Number of NSID Values : 1 00:27:50.912 Change Count : 0 00:27:50.912 ANA State : 1 00:27:50.912 Namespace Identifier : 1 00:27:50.912 00:27:50.912 Commands Supported and Effects 00:27:50.912 ============================== 00:27:50.912 Admin Commands 00:27:50.912 -------------- 00:27:50.912 Get Log Page (02h): Supported 00:27:50.912 Identify (06h): Supported 00:27:50.912 Abort (08h): Supported 00:27:50.912 Set Features (09h): Supported 00:27:50.912 Get Features (0Ah): Supported 00:27:50.912 Asynchronous Event Request (0Ch): Supported 00:27:50.912 Keep Alive (18h): Supported 00:27:50.912 I/O Commands 00:27:50.912 ------------ 00:27:50.912 Flush (00h): Supported 00:27:50.912 Write (01h): Supported LBA-Change 00:27:50.912 Read (02h): Supported 00:27:50.912 Write Zeroes (08h): Supported LBA-Change 00:27:50.912 Dataset Management (09h): Supported 00:27:50.912 00:27:50.912 Error Log 00:27:50.912 ========= 00:27:50.912 Entry: 0 00:27:50.912 Error Count: 0x3 00:27:50.912 Submission Queue Id: 0x0 00:27:50.912 Command Id: 0x5 00:27:50.912 Phase Bit: 0 00:27:50.912 Status Code: 0x2 00:27:50.912 Status Code Type: 0x0 00:27:50.912 Do Not Retry: 1 00:27:50.912 Error Location: 0x28 00:27:50.912 LBA: 0x0 00:27:50.912 Namespace: 0x0 00:27:50.912 Vendor Log Page: 0x0 00:27:50.912 ----------- 00:27:50.912 Entry: 1 00:27:50.912 Error Count: 0x2 00:27:50.912 Submission Queue Id: 0x0 00:27:50.912 Command Id: 0x5 00:27:50.912 Phase Bit: 0 00:27:50.912 Status Code: 0x2 00:27:50.912 Status Code Type: 0x0 00:27:50.912 Do Not Retry: 1 00:27:50.912 Error Location: 0x28 00:27:50.912 LBA: 0x0 00:27:50.912 Namespace: 0x0 00:27:50.912 Vendor Log Page: 0x0 00:27:50.912 ----------- 00:27:50.912 Entry: 2 00:27:50.912 Error Count: 0x1 00:27:50.912 Submission Queue Id: 0x0 00:27:50.912 Command Id: 0x4 
00:27:50.912 Phase Bit: 0 00:27:50.912 Status Code: 0x2 00:27:50.912 Status Code Type: 0x0 00:27:50.912 Do Not Retry: 1 00:27:50.912 Error Location: 0x28 00:27:50.912 LBA: 0x0 00:27:50.912 Namespace: 0x0 00:27:50.912 Vendor Log Page: 0x0 00:27:50.912 00:27:50.912 Number of Queues 00:27:50.912 ================ 00:27:50.912 Number of I/O Submission Queues: 128 00:27:50.912 Number of I/O Completion Queues: 128 00:27:50.912 00:27:50.912 ZNS Specific Controller Data 00:27:50.912 ============================ 00:27:50.912 Zone Append Size Limit: 0 00:27:50.912 00:27:50.912 00:27:50.912 Active Namespaces 00:27:50.912 ================= 00:27:50.912 get_feature(0x05) failed 00:27:50.912 Namespace ID:1 00:27:50.912 Command Set Identifier: NVM (00h) 00:27:50.912 Deallocate: Supported 00:27:50.912 Deallocated/Unwritten Error: Not Supported 00:27:50.912 Deallocated Read Value: Unknown 00:27:50.912 Deallocate in Write Zeroes: Not Supported 00:27:50.912 Deallocated Guard Field: 0xFFFF 00:27:50.912 Flush: Supported 00:27:50.912 Reservation: Not Supported 00:27:50.912 Namespace Sharing Capabilities: Multiple Controllers 00:27:50.912 Size (in LBAs): 1310720 (5GiB) 00:27:50.912 Capacity (in LBAs): 1310720 (5GiB) 00:27:50.912 Utilization (in LBAs): 1310720 (5GiB) 00:27:50.912 UUID: e4e97467-dfb2-47ad-95e0-845815566979 00:27:50.912 Thin Provisioning: Not Supported 00:27:50.912 Per-NS Atomic Units: Yes 00:27:50.912 Atomic Boundary Size (Normal): 0 00:27:50.912 Atomic Boundary Size (PFail): 0 00:27:50.912 Atomic Boundary Offset: 0 00:27:50.912 NGUID/EUI64 Never Reused: No 00:27:50.912 ANA group ID: 1 00:27:50.912 Namespace Write Protected: No 00:27:50.912 Number of LBA Formats: 1 00:27:50.912 Current LBA Format: LBA Format #00 00:27:50.912 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:27:50.913 00:27:50.913 07:25:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:50.913 07:25:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:50.913 07:25:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:52.286 rmmod nvme_tcp 00:27:52.286 rmmod nvme_fabrics 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:52.286 07:25:16 
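[Editor's note] The nvmftestfini teardown above syncs and then unloads the initiator-side kernel modules defensively: errexit is suspended with set +e and modprobe -r nvme-tcp sits inside a bounded retry loop, because module removal can fail transiently while connections are still draining. A minimal sketch of that pattern, reconstructed from the trace (the sleep between retries is an assumption; the log only shows the loop itself):

sync
set +e
for i in {1..20}; do
    # Removal can return -EBUSY while queues are still tearing down.
    modprobe -v -r nvme-tcp && break
    sleep 1   # assumed back-off, not shown in the trace
done
modprobe -v -r nvme-fabrics   # drop the fabrics core once nvme-tcp is gone
set -e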
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:52.286 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:27:52.544 07:25:16 
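[Editor's note] nvmf_fini then unwinds the virtual topology: the nvmf_br bridge is deleted first, then each initiator-side veth, while target0/target1 are skipped with continue because their /sys/class/net entries are already gone; those devices were moved into the target namespace and vanished when _remove_target_ns deleted it. A rough equivalent, using the device names from this run:

# Illustrative teardown; the guards mirror the [[ -e /sys/class/net/... ]] checks above.
[[ -e /sys/class/net/nvmf_br/address ]] && ip link delete nvmf_br
for dev in initiator0 initiator1; do
    [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
done
# target0/target1 need no explicit delete: removing the nvmf_ns_spdk
# namespace already destroyed the devices living inside it.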
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:27:52.544 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:53.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.110 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.369 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.369 00:27:53.369 real 0m3.796s 00:27:53.369 user 0m0.885s 00:27:53.369 sys 0m1.087s 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.369 ************************************ 00:27:53.369 END TEST nvmf_identify_kernel_target 00:27:53.369 ************************************ 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:53.369 
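[Editor's note] The clean_kernel_target trace above dismantles the kernel nvmet target strictly bottom-up, which is the order configfs requires: a directory can only be rmdir'ed once its children and references are gone. Condensed, using the NQN from this run (the namespace enable path is an assumption; the trace shows only the bare echo 0):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the 'echo 0' above
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"             # namespace first...
rmdir /sys/kernel/config/nvmet/ports/1   # ...then the port...
rmdir "$subsys"                          # ...then the subsystem itself
modprobe -r nvmet_tcp nvmet              # finally drop the modules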
07:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.369 ************************************ 00:27:53.369 START TEST nvmf_auth_host 00:27:53.369 ************************************ 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:53.369 * Looking for test storage... 00:27:53.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.369 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.370 --rc genhtml_branch_coverage=1 00:27:53.370 --rc genhtml_function_coverage=1 00:27:53.370 --rc genhtml_legend=1 00:27:53.370 --rc geninfo_all_blocks=1 00:27:53.370 --rc geninfo_unexecuted_blocks=1 00:27:53.370 00:27:53.370 ' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.370 --rc genhtml_branch_coverage=1 00:27:53.370 --rc genhtml_function_coverage=1 00:27:53.370 --rc genhtml_legend=1 00:27:53.370 --rc geninfo_all_blocks=1 00:27:53.370 --rc geninfo_unexecuted_blocks=1 00:27:53.370 00:27:53.370 ' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.370 --rc genhtml_branch_coverage=1 00:27:53.370 --rc genhtml_function_coverage=1 00:27:53.370 --rc genhtml_legend=1 00:27:53.370 --rc geninfo_all_blocks=1 00:27:53.370 --rc geninfo_unexecuted_blocks=1 00:27:53.370 00:27:53.370 ' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.370 --rc genhtml_branch_coverage=1 00:27:53.370 --rc genhtml_function_coverage=1 00:27:53.370 --rc genhtml_legend=1 00:27:53.370 --rc geninfo_all_blocks=1 00:27:53.370 --rc geninfo_unexecuted_blocks=1 00:27:53.370 00:27:53.370 ' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:53.370 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:53.630 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:53.630 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # 
nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@223 -- # create_target_ns 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@105 -- # delete_main_bridge 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local 
dev=initiator0 in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 
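[Editor's note] Worth noting how addresses are produced here: setup.sh draws them from an integer pool (ip_pool starts at 0x0a000001 and advances by two per interface pair, one address for the initiator and one for the target), and val_to_ip renders the value as a dotted quad, so 167772161 becomes 10.0.0.1. A standalone reconstruction of that conversion (the byte-shifting shown is an assumption; the trace only shows the final printf):

# val_to_ip: render a 32-bit integer as an IPv4 dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) $((val & 0xff))
}
val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1 (initiator0)
val_to_ip 167772162   # -> 10.0.0.2 (target0, the paired address)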
00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:53.631 10.0.0.1 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:53.631 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:53.632 10.0.0.2 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:53.632 07:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:53.632 07:25:17 
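[Editor's note] Every firewall rule above goes through the ipts wrapper, which tags the rule with -m comment --comment 'SPDK_NVMF:<original args>'. That tag is the whole cleanup strategy: the iptr helper seen during teardown re-renders the ruleset without the tagged lines instead of tracking individual rules. Roughly (the wrapper body is reconstructed from the expanded command in the trace):

# Tag each inserted rule so it can be filtered out wholesale later.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

# Teardown counterpart: dump, drop the tagged rules, restore the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore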
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 
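[Editor's note] Pair 1 repeats the recipe used for pair 0: two veth links per side, with the _br peers destined for the bridge and the target end pushed into the namespace, so initiator and target behave like separate hosts joined by nvmf_br. Condensed to plain ip commands for one pair (names as in this run; nvmf_br and nvmf_ns_spdk were created earlier):

ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk            # target side lives in the namespace
ip link set initiator1 up
ip link set initiator1_br up
ip link set target1_br up
ip netns exec nvmf_ns_spdk ip link set target1 up # bring the moved end up in-namespace
ip link set initiator1_br master nvmf_br          # enslave host-side peers to the bridge
ip link set target1_br master nvmf_br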
00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772163 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:53.632 10.0.0.3 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772164 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:53.632 10.0.0.4 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:53.632 07:25:17 
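[Editor's note] Note the tee /sys/class/net/<dev>/ifalias after each ip addr add: the scripts stash every assigned address in the interface's alias attribute so that later helpers (get_ip_address, used for the ping checks below) can recover it with a plain cat rather than parsing ip addr output. In short:

ip addr add 10.0.0.3/24 dev initiator1
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias   # record the address...
cat /sys/class/net/initiator1/ifalias                   # ...and read it back later -> 10.0.0.3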
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:53.632 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:53.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:53.633 00:27:53.633 --- 10.0.0.1 ping statistics --- 00:27:53.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.633 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:53.633 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:53.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:53.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:27:53.892 00:27:53.892 --- 10.0.0.2 ping statistics --- 00:27:53.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.892 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:53.892 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:53.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
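The addresses being pinged follow directly from the `(( _dev++, ip_pool += 2 ))` loop at the top of this trace: the pool starts at 10.0.0.1 and each initiator/target pair consumes two consecutive addresses:

    # pair 0: initiator0 = 10.0.0.1, target0 = 10.0.0.2
    # pair 1: initiator1 = 10.0.0.3, target1 = 10.0.0.4

which matches the four pings in this stretch of the log.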
00:27:53.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:53.893 00:27:53.893 --- 10.0.0.3 ping statistics --- 00:27:53.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.893 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:53.893 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
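The outer loop driving these four pings is ping_ips, invoked with pairs=2 at the start of this block. Reconstructed from the traced setup.sh@87-91 lines, under the same assumptions as the sketches above:

    ping_ips() {
        local pairs=$1 pair
        for ((pair = 0; pair < pairs; pair++)); do
            # initiator IPs must answer from inside the target namespace,
            # target IPs from the host side of each veth pair
            ping_ip "$(get_initiator_ip_address "initiator$pair")" NVMF_TARGET_NS_CMD
            ping_ip "$(get_target_ip_address "target$pair" NVMF_TARGET_NS_CMD)"
        done
    }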
00:27:53.893 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:27:53.893 00:27:53.893 --- 10.0.0.4 ping statistics --- 00:27:53.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.893 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # return 0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:53.893 
07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:53.893 
07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:53.893 ' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:53.893 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=76771 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 76771 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 76771 ']' 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
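nvmfappstart launches nvmf_tgt inside the nvmf_ns_spdk namespace (next trace line) with nvme_auth logging enabled, and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers. Once it is up, host/auth.sh@73-77 fills the keys[] and ckeys[] slots with DHCHAP secrets, as traced below. A reconstruction of gen_dhchap_key from the traced nvmf/common.sh@525-534 commands — a sketch in which the redirect into $file is an assumption, since only the python call is visible in the trace:

    gen_dhchap_key() {
        local digest=$1 len=$2        # e.g. "null" 32 or "sha512" 64
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # emits DHHC-1:<digest id, 2 hex digits>:<base64(hex string + 4-byte CRC32)>:
        # -- the same secret layout nvme-cli's gen-dhchap-key produces
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }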
00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.894 07:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d6dfd3319a1223806a5826ffa78da91c 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.64O 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d6dfd3319a1223806a5826ffa78da91c 0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d6dfd3319a1223806a5826ffa78da91c 0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d6dfd3319a1223806a5826ffa78da91c 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.64O 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.64O 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.64O 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5a8f1be2fdb45f8ef1e95c0a8a70f2bf3203fc37c3cfa924e7f04dcbf1fedc6c 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.8UE 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5a8f1be2fdb45f8ef1e95c0a8a70f2bf3203fc37c3cfa924e7f04dcbf1fedc6c 3 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5a8f1be2fdb45f8ef1e95c0a8a70f2bf3203fc37c3cfa924e7f04dcbf1fedc6c 3 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5a8f1be2fdb45f8ef1e95c0a8a70f2bf3203fc37c3cfa924e7f04dcbf1fedc6c 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.8UE 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.8UE 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8UE 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d2018fe24da378174481a399f7e9a8a6969fd701aaa07378 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.y2r 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 
d2018fe24da378174481a399f7e9a8a6969fd701aaa07378 0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d2018fe24da378174481a399f7e9a8a6969fd701aaa07378 0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d2018fe24da378174481a399f7e9a8a6969fd701aaa07378 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.y2r 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.y2r 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.y2r 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=a0061f3314fbb56a16a64eb184fedaf559c32caa612788c5 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.mNN 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key a0061f3314fbb56a16a64eb184fedaf559c32caa612788c5 2 00:27:54.827 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 a0061f3314fbb56a16a64eb184fedaf559c32caa612788c5 2 00:27:54.828 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:54.828 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:54.828 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=a0061f3314fbb56a16a64eb184fedaf559c32caa612788c5 00:27:54.828 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:27:54.828 07:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.mNN 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.mNN 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mNN 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # 
local digest len file key 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=f5c41ee6b096cc2c7fad7b2ed157b5c2 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.qIl 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key f5c41ee6b096cc2c7fad7b2ed157b5c2 1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 f5c41ee6b096cc2c7fad7b2ed157b5c2 1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=f5c41ee6b096cc2c7fad7b2ed157b5c2 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.qIl 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.qIl 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qIl 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=751eac5d968db2b011a43ceaca313897 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.0fY 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 751eac5d968db2b011a43ceaca313897 1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 751eac5d968db2b011a43ceaca313897 1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:55.087 
07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=751eac5d968db2b011a43ceaca313897 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.0fY 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.0fY 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0fY 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5f28d4f450dcbe37cb2015989b1daf05140e6fb918522580 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.EU6 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5f28d4f450dcbe37cb2015989b1daf05140e6fb918522580 2 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5f28d4f450dcbe37cb2015989b1daf05140e6fb918522580 2 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:55.087 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5f28d4f450dcbe37cb2015989b1daf05140e6fb918522580 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.EU6 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.EU6 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EU6 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@528 -- # digest=null 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=2182f4ef6cbf3f573e07744c5eaa4088 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.flj 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 2182f4ef6cbf3f573e07744c5eaa4088 0 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 2182f4ef6cbf3f573e07744c5eaa4088 0 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=2182f4ef6cbf3f573e07744c5eaa4088 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.flj 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.flj 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.flj 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=614b9140c033a82d372f5160eaadb53997d0f7285df4c82e7a10287a11eab64c 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.y25 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 614b9140c033a82d372f5160eaadb53997d0f7285df4c82e7a10287a11eab64c 3 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 614b9140c033a82d372f5160eaadb53997d0f7285df4c82e7a10287a11eab64c 3 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=614b9140c033a82d372f5160eaadb53997d0f7285df4c82e7a10287a11eab64c 00:27:55.088 
07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.y25 00:27:55.088 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.y25 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.y25 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 76771 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 76771 ']' 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.64O 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8UE ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8UE 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.y2r 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.mNN ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mNN 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qIl 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0fY ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0fY 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EU6 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.flj ]] 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.flj 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.346 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.y25 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@35 -- # get_main_ns_ip 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:55.604 07:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:55.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.862 Waiting for block devices as requested 00:27:55.862 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:55.862 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:56.120 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:56.379 No valid GPT data, bailing 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:56.380 No valid GPT data, bailing 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:56.380 No valid GPT data, bailing 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:56.380 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:56.638 No valid GPT data, bailing 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -a 10.0.0.1 -t tcp -s 4420 00:27:56.639 00:27:56.639 Discovery Log Number of Records 2, Generation counter 2 00:27:56.639 =====Discovery Log Entry 0====== 00:27:56.639 trtype: tcp 00:27:56.639 adrfam: ipv4 00:27:56.639 subtype: current discovery subsystem 00:27:56.639 treq: not specified, sq flow control disable supported 00:27:56.639 portid: 1 00:27:56.639 trsvcid: 4420 00:27:56.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:56.639 traddr: 10.0.0.1 00:27:56.639 eflags: none 00:27:56.639 sectype: none 00:27:56.639 =====Discovery Log Entry 1====== 00:27:56.639 trtype: tcp 00:27:56.639 adrfam: ipv4 00:27:56.639 subtype: nvme subsystem 00:27:56.639 treq: not specified, sq flow control disable supported 00:27:56.639 portid: 1 00:27:56.639 trsvcid: 4420 00:27:56.639 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:56.639 traddr: 10.0.0.1 00:27:56.639 eflags: none 00:27:56.639 sectype: none 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:56.639 
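The nvmf/common.sh@452-482 trace above brings up a kernel NVMe-oF/TCP target entirely through configfs, and host/auth.sh@36-38 then pins it to a single allowed host. The xtrace does not show the redirection targets of the echo calls, so the attribute file names below are the standard nvmet configfs ones; the NQNs, namespace device and 10.0.0.1:4420 endpoint are taken from the log. A minimal root-shell sketch of the same bring-up, assuming the nvmet and nvmet-tcp modules are loaded:

    # Kernel NVMe-oF/TCP target over configfs, mirroring the traced steps.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    mkdir "$subsys" "$subsys/namespaces/1" "$port"                 # @460-462
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # @467
    echo 1 > "$subsys/attr_allow_any_host"                         # @469 (revoked below)
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # @470: spare disk found by the GPT scan
    echo 1 > "$subsys/namespaces/1/enable"                         # @471
    echo 10.0.0.1 > "$port/addr_traddr"                            # @473
    echo tcp  > "$port/addr_trtype"                                # @474
    echo 4420 > "$port/addr_trsvcid"                               # @475
    echo ipv4 > "$port/addr_adrfam"                                # @476
    ln -s "$subsys" "$port/subsystems/"                            # @479: expose subsystem on the port

    mkdir "$host"                                                  # host/auth.sh@36
    echo 0 > "$subsys/attr_allow_any_host"                         # host/auth.sh@37: allow-list only
    ln -s "$host" "$subsys/allowed_hosts/"                         # host/auth.sh@38

The nvme discover output earlier in the log (nvmf/common.sh@482) is the sanity check that both the discovery subsystem and cnode0 are reachable on 10.0.0.1:4420 before authentication testing starts.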
07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.639 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.898 nvme0n1 00:27:56.898 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.898 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.898 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 nvme0n1 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
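Each connect_authenticate cycle in the trace is two JSON-RPC calls against the SPDK initiator; rpc_cmd is the test wrapper around scripts/rpc.py. Spelled out for a single-pair cycle (keyid 0, sha256/ffdhe2048), with key0/ckey0 being key names the test registered earlier, outside this excerpt:

    # Limit the initiator to the digest and DH group under test
    # (the very first cycle instead allowed the full matrix).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe2048

    # Attach to the kernel target with bidirectional DH-HMAC-CHAP:
    # key0 authenticates the host, ckey0 lets the host verify the controller.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

On success the attach prints the names of the bdevs it created, which is where the lone nvme0n1 lines in the log come from.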
00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:56.899 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.158 nvme0n1 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.158 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.159 nvme0n1 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.159 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 nvme0n1 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:57.418 07:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:57.418 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:57.419 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:57.419 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:57.419 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.419 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.419 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.677 nvme0n1 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.677 
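keyid 4 differs from the others: its controller key is empty (the host/auth.sh@46 ckey= and @51 [[ -z '' ]] lines above), so the attach passes only --dhchap-key key4 and authentication is unidirectional; the target verifies the host, but the host never challenges the controller. The @58 line handles this with a bash array plus :+ expansion, which drops the flag pair entirely instead of passing an empty string. A standalone illustration of the idiom:

    # ${var:+words} expands to the words only when var is set and non-empty,
    # so the ckey array stays empty for key ids without a controller secret.
    ckeys=([1]="some-ctrlr-secret" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid $keyid -> --dhchap-key key${keyid} ${ckey[*]}"
    done
    # keyid 1 -> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # keyid 4 -> --dhchap-key key4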
07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.677 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.678 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
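All the secrets in this sweep use the NVMe DH-HMAC-CHAP textual key format, DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a CRC-32 check value; that is why the keys above carry prefixes DHHC-1:00: through DHHC-1:03:. Recent nvme-cli can mint such keys; a hedged example (flag spellings as in nvme-cli 2.x):

    # Generate a random 32-byte secret, transformed with SHA-256 (hmac id 1)
    # and bound to the given host NQN:
    nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
    # prints e.g. DHHC-1:01:<base64>: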
00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.936 07:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.936 nvme0n1 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.936 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
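Everything from here on repeats one skeleton: the host/auth.sh@100-104 markers show a three-deep sweep, and each innermost pass is provision the key on the target, connect with it, verify, detach. Schematically (a paraphrase of the loop structure visible in the trace, not the script verbatim; keys/ckeys are the DHHC-1 arrays set up earlier):

    for digest in sha256 sha384 sha512; do                                    # host/auth.sh@100
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do  # @101
            for keyid in "${!keys[@]}"; do                                    # @102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103: target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: initiator side
            done
        done
    done

This excerpt covers sha256 with ffdhe2048 and the start of ffdhe3072; the remaining groups and digests follow the identical pattern.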
00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.194 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.195 07:25:22 
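The dozen setup.sh@156-166 lines that precede every attach (including the run in progress here) are get_main_ns_ip resolving the initiator's address: with NET_TYPE=virt the test stores each veth device's IP in its interface alias, so the lookup reduces to one sysfs read. A condensed sketch of that helper:

    # Condensed get_ip_address: the full helper also supports network
    # namespaces (the in_ns variable in the trace), elided here.
    get_ip_address() {
        local dev=$1 ip
        ip=$(cat "/sys/class/net/$dev/ifalias")
        [[ -n $ip ]] && echo "$ip"    # -> 10.0.0.1 for initiator0
    }
    get_ip_address initiator0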
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.195 nvme0n1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.195 07:25:22 
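After every attach the test proves the controller really exists before moving on; that is the get_controllers/jq/detach triplet recurring throughout, which continues immediately below for this keyid 2 cycle. As standalone calls:

    # host/auth.sh@64: the expected controller name must come back
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # host/auth.sh@65: detach so the next digest/dhgroup/keyid cycle starts clean
    scripts/rpc.py bdev_nvme_detach_controller nvme0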
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.195 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 nvme0n1 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 nvme0n1 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- 
# local dev=initiator0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:58.712 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.713 nvme0n1 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.713 07:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 
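The get_main_ns_ip -> get_initiator_ip_address -> get_ip_address chain traced above reduces to a single sysfs read: with NET_TYPE=virt the test fixture records each veth device's address in its ifalias, so no "ip addr" parsing is needed. A condensed sketch of the lookup (the real helper in nvmf/setup.sh also takes the in_ns namespace argument visible in the trace, elided here):

get_ip_address() {
    local dev=$1 ip
    # The virt-network fixture stores each device's address in its ifalias.
    ip=$(cat "/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0   # -> 10.0.0.1, passed to -a in the attach that follows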
00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:59.307 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:59.566 nvme0n1
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]]
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
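On the target side, the @42-@51 trace above is nvmet_auth_set_key pushing the DH-HMAC-CHAP parameters at the kernel nvmet target; the echoes at @48-@51 are redirected into configfs attributes of the allowed-host entry (xtrace does not print redirections). A minimal sketch, assuming the standard nvmet configfs attribute names and host path; the actual helper lives in host/auth.sh:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed configfs path for the host entry created during test setup.
    local cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$cfs/dhchap_hash"      # @48: e.g. hmac(sha256)
    echo "$dhgroup"      > "$cfs/dhchap_dhgroup"   # @49: e.g. ffdhe4096
    echo "$key"          > "$cfs/dhchap_key"       # @50: host secret (DHHC-1:xx:...)
    # @51: the controller key is optional; keyid 4 has none, so the write is skipped.
    [[ -z $ckey ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"
}

Setting dhchap_ctrl_key enables bidirectional authentication, which is why the attach for keyid 4 passes --dhchap-key alone while keyids 0-3 also pass --dhchap-ctrlr-key.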
00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:59.566 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.567 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.825 nvme0n1 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:59.825 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.826 07:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.826 nvme0n1 00:27:59.826 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.826 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.826 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.826 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.826 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 
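Host-side, each connect_authenticate digest dhgroup keyid cycle in this log is the same four RPCs; key0..key4 and ckey0..ckey3 are the names of secrets registered with the SPDK keyring earlier in the run (not shown in this excerpt). A condensed sketch mirroring the @55-@65 trace:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey
    # @58: ${ckeys[keyid]:+...} expands to nothing when no ckey exists for this
    # keyid, so keyid 4 attaches with --dhchap-key alone.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # @64: authentication only counts if a controller actually showed up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # @65: detach so the next digest/dhgroup/keyid combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The [[ nvme0 == \n\v\m\e\0 ]] form in the trace is just xtrace escaping every character of the right-hand side so it reads as a literal, non-glob match. The @101/@102 loops then sweep this cycle across every dhgroup (this excerpt runs ffdhe3072 through ffdhe8192) for all five keys.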
00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.098 nvme0n1 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.098 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.357 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.358 nvme0n1 00:28:00.358 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.358 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.358 07:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.358 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.358 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.358 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.616 07:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:01.991 
07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.991 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.557 nvme0n1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.557 07:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:02.557 07:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.557 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.816 nvme0n1 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.816 07:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.074 nvme0n1 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.075 07:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.075 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 nvme0n1 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:03.641 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.642 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.919 nvme0n1 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.919 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.920 07:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 nvme0n1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.485 07:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.051 nvme0n1 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.051 07:25:29 
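The pattern this trace keeps repeating after each authenticated attach is a fixed verify-and-teardown step: list the controllers over RPC, check that the expected name came back, then detach. A minimal sketch of that step, assuming a running SPDK target and the stock scripts/rpc.py client (the rpc_cmd seen in the trace is the test framework's wrapper around it):

    # Confirm the authenticated attach produced the expected controller, then
    # detach it so the next digest/dhgroup/keyid combination starts clean.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0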
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:05.051 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.052 07:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.052 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.616 nvme0n1 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.616 07:25:29 
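Every iteration re-resolves the initiator address the same way: get_main_ns_ip falls through to get_ip_address, which reads whatever the network setup stage stored in the device's ifalias. Condensed, and assuming the initiator0 device naming used throughout this run:

    # The setup scripts stash each test interface's address in its ifalias;
    # for initiator0 in this run that is 10.0.0.1.
    dev=initiator0
    ip=$(cat "/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"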
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:05.616 07:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.616 07:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 nvme0n1 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.182 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 
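Stripped of the IP lookup, the host side of connect_authenticate is just two RPCs: pin the initiator to a single digest/dhgroup pair, then attach with the keys for the keyid under test. The controller key is passed only when a ckey exists for that id, which is why the keyid-4 attach above carries no --dhchap-ctrlr-key. As a sketch, with scripts/rpc.py standing in for rpc_cmd, and key3/ckey3 being key names registered earlier in the script:

    # Restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair, then attach.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3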
nvme0n1 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.748 07:25:30 
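The secrets echoed throughout use the standard DHHC-1 representation, DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key material plus a CRC-32; the test's key set deliberately mixes those classes. A secret in this format can be generated with a reasonably recent nvme-cli (flag names per its gen-dhchap-key subcommand; the NQN is the one from this run):

    # Emit a DHHC-1 secret with the SHA-256 transformation (the "01" class).
    nvme gen-dhchap-key --hmac=1 --nqn=nqn.2024-02.io.spdk:host0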
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 nvme0n1 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.748 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.007 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.008 nvme0n1 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 
00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.008 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.267 nvme0n1 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 
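The nvmet_auth_set_key half primes the in-kernel target with the same credentials: the 'hmac(sha384)', dhgroup, and DHHC-1 strings echoed at auth.sh@48-51 end up in the host entry's nvmet attributes. Redirection targets never show up in xtrace output, so the configfs paths below are an assumption based on the standard kernel nvmet layout, not something the trace confirms:

    # Target-side DH-HMAC-CHAP setup for one host entry (paths assumed; the
    # trace only shows the echoed values). The ctrl-key write is skipped when
    # no ckey exists for the keyid, matching the [[ -z ... ]] guard at @51.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo "DHHC-1:02:<base64>:" > "$host/dhchap_key"
    echo "DHHC-1:00:<base64>:" > "$host/dhchap_ctrl_key"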
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:07.267 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.268 nvme0n1
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=:
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=:
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.268 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.525 nvme0n1
00:28:07.525 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.525 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.525 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.525 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.525 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT:
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=:
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT:
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=:
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.526 nvme0n1
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.526 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.784 nvme0n1
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.784 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx:
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf:
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx:
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf:
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.785 07:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.043 nvme0n1
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
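Each connect_authenticate pass on the initiator reduces to the two RPCs visible in the trace, a name check, and a teardown. Condensed, with rpc_cmd spelled out as a direct scripts/rpc.py call (an assumption; the wrapper is defined outside this excerpt) and with key2/ckey2 being key names registered earlier in the test, the sha384/ffdhe3072/keyid-2 pass that just completed looks like:

  rpc=scripts/rpc.py   # assumed invocation; the log's rpc_cmd wrapper handles the socket
  # Pin the initiator to the digest/dhgroup pair under test...
  "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # ...then connect with DH-HMAC-CHAP enabled in both directions.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Authentication passed iff the controller shows up under its expected name.
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  "$rpc" bdev_nvme_detach_controller nvme0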
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==:
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i:
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==:
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]]
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i:
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.043 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.044 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 nvme0n1
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 nvme0n1
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=:
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:08.302 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.560 nvme0n1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==:
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==:
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:08.560 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.818 nvme0n1
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx:
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf:
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx:
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]]
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf:
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.818 07:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.818 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.075 nvme0n1
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==:
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i:
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==:
00:28:09.075 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i:
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.076 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.334 nvme0n1
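Every pass also re-resolves the initiator address through the same get_main_ns_ip chain; because the test's virtual interfaces store their address in ifalias, the @156-@166 cascade traced above bottoms out in a single cat. A sketch of the effective lookup:

  # Effective body of the get_ip_address chain seen in the trace.
  get_ip_address_sketch() {
      local dev=$1                       # initiator0 in this run
      cat "/sys/class/net/$dev/ifalias"  # prints 10.0.0.1 here
  }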
00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.334 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 nvme0n1 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in 
"${dhgroups[@]}" 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.592 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.593 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.851 nvme0n1 00:28:09.851 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.851 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.851 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.851 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.851 07:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:09.852 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.110 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.387 nvme0n1 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.387 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.388 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 nvme0n1 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 07:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
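Every attach re-derives the initiator address through the same helper chain traced right after this call: get_main_ns_ip → get_initiator_ip_address → get_ip_address initiator0 (nvmf/setup.sh@174/@156). The address itself lives in the interface's ifalias, which is why each lookup bottoms out in a cat of /sys/class/net/initiator0/ifalias. Condensed, dropping the network-namespace branch that is unused in this run (in_ns is empty) and the get_net_dev indirection:

    get_ip_address() {
        local dev=$1 ip
        ip=$(cat "/sys/class/net/$dev/ifalias")   # the setup scripts store 10.0.0.1 here
        [[ -n $ip ]] && echo "$ip"
    }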
00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.652 07:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.910 nvme0n1 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha384 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.910 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
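nvmet_auth_set_key (host/auth.sh@42-@51, traced just above for keyid 4) programs the target side before each attach: it emits the digest, the DH group, and the DHHC-1 secret, plus a controller key only when the keyid defines one — keyid 4 has an empty ckey, so the [[ -z '' ]] branch skips it. The echoes are redirected into the kernel nvmet configfs host entry; the attribute names below are an assumption based on the usual nvmet layout, since the redirect targets are not captured in this trace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"     # digest
    echo ffdhe6144 > "$host/dhchap_dhgroup"       # DH group
    echo 'DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=:' \
        > "$host/dhchap_key"                      # host secret for keyid 4
    # keyids with a non-empty ckey would also write "$host/dhchap_ctrl_key"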
00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.169 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.427 nvme0n1 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.427 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.992 nvme0n1 00:28:11.992 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.992 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.993 07:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.993 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 nvme0n1 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
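Each successful pass ends with the same verification/teardown sequence that keeps recurring in this block: the bare nvme0n1 line is the namespace appearing, bdev_nvme_get_controllers piped through jq confirms the controller name (the escaped \n\v\m\e\0 is just xtrace quoting of the nvme0 pattern match at host/auth.sh@64), and the controller is detached so the next combination starts clean. A sketch of that check:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                        # controller exists => DH-HMAC-CHAP handshake succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next keyid/dhgroup combination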
00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:12.560 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:12.561 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:12.561 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:12.561 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.561 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.561 07:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.125 nvme0n1 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.125 
07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:13.125 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:13.126 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:13.126 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:13.126 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.126 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.126 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.716 nvme0n1 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:13.716 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.717 07:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 nvme0n1 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.358 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 nvme0n1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 nvme0n1 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:14.359 
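Each connect_authenticate iteration traced here follows one fixed pattern: restrict the host to a single digest/DH-group pair with bdev_nvme_set_options, resolve the initiator address, attach a controller with the keyid under test, verify that nvme0 appears, then detach. A condensed sketch of one iteration, assuming the rpc_cmd helper and the keyN/ckeyN names registered earlier in host/auth.sh:

  # One iteration, e.g. sha512 + ffdhe2048 + keyid 1 (sketch only).
  digest=sha512 dhgroup=ffdhe2048 keyid=1

  # Allow only the digest/DH-group combination under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # ckey expands to --dhchap-ctrlr-key ckeyN only when a controller key
  # exists for this keyid (it is empty for keyid 4 in this run).
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"

  # Authentication succeeded if the controller came up; then clean up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0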
07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.359 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.360 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 nvme0n1 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 07:25:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 nvme0n1 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.618 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.876 07:25:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:14.876 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 nvme0n1 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:14.877 07:25:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:14.877 07:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:14.877 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:14.877 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:14.877 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:14.877 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.877 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.877 07:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.135 nvme0n1 00:28:15.135 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.135 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.135 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.135 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.135 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 nvme0n1 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
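The get_main_ns_ip blocks that precede every attach reduce to a sysfs read: the test network setup stores each veth device's address in its interface alias, so resolving the initiator IP is just a cat of ifalias. A minimal sketch of that helper chain, assuming the initiator0 device configured by nvmf/setup.sh:

  # Sketch of the lookup traced above (nvmf/setup.sh@156-@166): the
  # address was stashed in the device's ifalias at network setup time.
  get_ip_address() {
      local dev=$1 ip
      ip=$(cat "/sys/class/net/$dev/ifalias")
      [[ -n $ip ]] && echo "$ip"
  }
  get_ip_address initiator0   # prints 10.0.0.1 in this run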
00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:15.136 07:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:15.136 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:15.137 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.137 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.137 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.395 nvme0n1 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:15.395 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.396 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 nvme0n1 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.664 07:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 nvme0n1 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
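The cycle traced above repeats for every digest/dhgroup/keyid combination: program the expected secret into the kernel nvmet target, restrict the initiator's bdev_nvme module to the one digest and DH group under test, attach with the matching keypair, verify, detach. A condensed bash sketch of one iteration follows; the configfs attribute names are an assumption (the real nvmet_auth_set_key lives in test/nvmf/host/auth.sh and is visible here only through its echoed writes), and key$keyid/ckey$keyid are keyring names registered earlier in the run, outside this excerpt:

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

nvmet_auth_set_key() {   # target side: program the secret the host must prove
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo "hmac($digest)"   > "$host/dhchap_hash"      # e.g. hmac(sha512)
    echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${keys[$keyid]}" > "$host/dhchap_key"       # DHHC-1:xx:... secret
    # bidirectional auth only when a controller key exists for this keyid
    [[ -n ${ckeys[$keyid]} ]] && echo "${ckeys[$keyid]}" > "$host/dhchap_ctrl_key"
}

connect_authenticate() {  # initiator side: connect with the matching keypair
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # the address is resolved from the interface alias in the real script
    # (see the ifalias sketch further down); hard-coded here for brevity
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
}

nvmet_auth_set_key sha512 ffdhe3072 3 && connect_authenticate sha512 ffdhe3072 3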
00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:15.664 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.923 07:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.923 nvme0n1 00:28:15.923 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.923 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.923 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.923 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.923 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:16.181 07:25:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.181 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.439 nvme0n1 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:16.439 
07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.439 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.440 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
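The 10.0.0.1 handed to every attach above is not hard-coded: each connect re-resolves the initiator address through the get_ip_address/get_net_dev helpers in test/nvmf/setup.sh, which in this NET_TYPE=virt run read it back out of the interface alias. A minimal sketch of that lookup (the in_ns/namespace handling visible in the trace is elided):

get_ip_address() {
    local dev=$1 ip
    # the virt network setup stores the test address in the interface alias,
    # so it can be recovered without parsing `ip addr` output
    ip=$(cat "/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0   # -> 10.0.0.1 in this run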
00:28:16.698 nvme0n1 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.698 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.955 nvme0n1 00:28:16.955 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.955 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.955 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.955 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.955 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:16.956 07:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 
-- # [[ -n initiator0 ]] 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 nvme0n1 00:28:17.213 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.214 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.471 nvme0n1 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:17.471 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
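Each bare nvme0n1 line in the trace is the namespace surfacing once a DH-HMAC-CHAP handshake succeeds. The test then checks that exactly the expected controller came up and tears it down before the next digest/dhgroup/keyid combination; condensed from the rpc_cmd/jq calls above:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # authentication produced the controller
rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid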
00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:17.472 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:17.787 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:17.788 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:17.788 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.788 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.788 07:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.045 nvme0n1 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.045 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.046 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.304 nvme0n1 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.304 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.563 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.820 nvme0n1 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:18.820 07:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.820 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.821 07:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.386 nvme0n1 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.386 
07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZkZmQzMzE5YTEyMjM4MDZhNTgyNmZmYTc4ZGE5MWPjGrZT: 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWE4ZjFiZTJmZGI0NWY4ZWYxZTk1YzBhOGE3MGYyYmYzMjAzZmMzN2MzY2ZhOTI0ZTdmMDRkY2JmMWZlZGM2Y68re1Q=: 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
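The block above is one pass of the test's digest/dhgroup sweep: for each key id, host/auth.sh provisions the key (and, when a bidirectional controller key is configured, the matching ckey) on the kernel nvmet target, points the initiator at the same digest and DH group through bdev_nvme_set_options, then attaches and detaches a controller to prove the DH-HMAC-CHAP handshake completes. A condensed sketch of that loop, assuming the keys/ckeys arrays and the nvmet_auth_set_key and rpc_cmd helpers that the xtrace output references (the outer digests loop is inferred, since this excerpt only shows the sha512 and sha256 passes):

# Condensed sketch of the host/auth.sh sweep traced above; keys, ckeys,
# nvmet_auth_set_key and rpc_cmd come from the test scripts, not from here.
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
      # expands to --dhchap-ctrlr-key ckeyN only when a controller key exists:
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

The ${ckeys[keyid]:+...} expansion is why key 4, which has no controller key (ckey is empty in the trace), is attached with --dhchap-key alone.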
00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.386 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.951 nvme0n1 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.951 07:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
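Before every attach, get_main_ns_ip resolves the initiator address by reading the ifalias attribute of the initiator0 device, where the virtual network setup stashed the test IP. A minimal sketch of that lookup, assuming the initiator0 device created by nvmf/setup.sh (the real helper also supports running inside a network namespace, per its in_ns local):

# Minimal sketch of the ifalias lookup seen in the trace; the setup scripts
# store the test IP as the interface alias instead of parsing `ip addr`.
get_ip_address() {
    local dev=$1 ip
    ip=$(cat "/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0    # prints 10.0.0.1 in this run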
00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.951 07:25:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:19.951 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.952 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.515 nvme0n1 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.515 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.772 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.772 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.772 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.773 07:25:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.773 07:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.030 nvme0n1 00:28:21.030 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.030 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.030 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.030 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.030 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYyOGQ0ZjQ1MGRjYmUzN2NiMjAxNTk4OWIxZGFmMDUxNDBlNmZiOTE4NTIyNTgwkcscgw==: 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE4MmY0ZWY2Y2JmM2Y1NzNlMDc3NDRjNWVhYTQwODiZuW7i: 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.323 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.579 nvme0n1 00:28:21.579 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjE0YjkxNDBjMDMzYTgyZDM3MmY1MTYwZWFhZGI1Mzk5N2QwZjcyODVkZjRjODJlN2ExMDI4N2ExMWVhYjY0Y1wU2kE=: 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- 
# local dev=initiator0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.838 07:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.402 nvme0n1 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:22.402 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.403 request: 00:28:22.403 { 00:28:22.403 "name": "nvme0", 00:28:22.403 "trtype": "tcp", 00:28:22.403 "traddr": "10.0.0.1", 00:28:22.403 "adrfam": "ipv4", 00:28:22.403 "trsvcid": "4420", 00:28:22.403 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.403 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.403 "prchk_reftag": false, 00:28:22.403 "prchk_guard": false, 00:28:22.403 "hdgst": false, 00:28:22.403 "ddgst": false, 00:28:22.403 "allow_unrecognized_csi": false, 00:28:22.403 "method": "bdev_nvme_attach_controller", 00:28:22.403 "req_id": 1 00:28:22.403 } 00:28:22.403 Got JSON-RPC error response 00:28:22.403 response: 00:28:22.403 { 00:28:22.403 "code": -5, 00:28:22.403 "message": "Input/output error" 00:28:22.403 } 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.403 07:25:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.403 request: 00:28:22.403 { 00:28:22.403 "name": "nvme0", 00:28:22.403 "trtype": "tcp", 00:28:22.403 "traddr": "10.0.0.1", 00:28:22.403 "adrfam": "ipv4", 00:28:22.403 "trsvcid": "4420", 00:28:22.403 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.403 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.403 "prchk_reftag": false, 00:28:22.403 "prchk_guard": false, 00:28:22.403 "hdgst": false, 00:28:22.403 "ddgst": false, 00:28:22.403 "dhchap_key": "key2", 00:28:22.403 "allow_unrecognized_csi": false, 00:28:22.403 "method": "bdev_nvme_attach_controller", 00:28:22.403 "req_id": 1 00:28:22.403 } 00:28:22.403 Got JSON-RPC error response 00:28:22.403 response: 00:28:22.403 { 00:28:22.403 "code": -5, 00:28:22.403 "message": "Input/output error" 00:28:22.403 } 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
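Both rejected attaches above rely on the NOT helper: with the target provisioned for key 1, attaching with no DH-HMAC-CHAP key or with only key2 must fail, so bdev_nvme_attach_controller surfaces the JSON-RPC error -5 (Input/output error) and the wrapper inverts the exit status. A condensed sketch of that assertion pattern, assuming rpc_cmd as used throughout the trace; the real helper in autotest_common.sh also validates the wrapped argument with type -t, as the trace shows:

# Condensed sketch of the NOT negative-test wrapper traced above.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only when the wrapped command failed
}

NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2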
00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.403 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.404 request: 00:28:22.404 { 00:28:22.404 "name": "nvme0", 00:28:22.404 "trtype": "tcp", 00:28:22.404 "traddr": "10.0.0.1", 00:28:22.404 "adrfam": "ipv4", 00:28:22.404 "trsvcid": "4420", 00:28:22.404 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.404 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.404 "prchk_reftag": false, 00:28:22.404 "prchk_guard": false, 00:28:22.404 "hdgst": false, 00:28:22.404 "ddgst": false, 00:28:22.404 "dhchap_key": "key1", 00:28:22.404 "dhchap_ctrlr_key": "ckey2", 00:28:22.404 "allow_unrecognized_csi": false, 00:28:22.404 "method": "bdev_nvme_attach_controller", 00:28:22.404 "req_id": 1 00:28:22.404 } 00:28:22.404 Got JSON-RPC error response 00:28:22.404 response: 00:28:22.404 { 00:28:22.404 "code": -5, 00:28:22.404 "message": "Input/output error" 00:28:22.404 } 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.404 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.662 nvme0n1 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # 
local arg=rpc_cmd 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.662 request: 00:28:22.662 { 00:28:22.662 "name": "nvme0", 00:28:22.662 "dhchap_key": "key1", 00:28:22.662 "dhchap_ctrlr_key": "ckey2", 00:28:22.662 "method": "bdev_nvme_set_keys", 00:28:22.662 "req_id": 1 00:28:22.662 } 00:28:22.662 Got JSON-RPC error response 00:28:22.662 response: 00:28:22.662 { 00:28:22.662 "code": -13, 00:28:22.662 "message": "Permission denied" 00:28:22.662 } 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:22.662 07:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.694 07:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDIwMThmZTI0ZGEzNzgxNzQ0ODFhMzk5ZjdlOWE4YTY5NjlmZDcwMWFhYTA3Mzc4zYluZA==: 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: ]] 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAwNjFmMzMxNGZiYjU2YTE2YTY0ZWIxODRmZWRhZjU1OWMzMmNhYTYxMjc4OGM10ZW4/w==: 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:23.694 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.695 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.976 nvme0n1 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjVjNDFlZTZiMDk2Y2MyYzdmYWQ3YjJlZDE1N2I1YzKgmRBx: 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: ]] 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzUxZWFjNWQ5NjhkYjJiMDExYTQzY2VhY2EzMTM4OTdILFMf: 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.976 request: 00:28:23.976 { 00:28:23.976 "name": "nvme0", 00:28:23.976 "dhchap_key": "key2", 00:28:23.976 "dhchap_ctrlr_key": "ckey1", 00:28:23.976 "method": "bdev_nvme_set_keys", 00:28:23.976 "req_id": 1 00:28:23.976 } 00:28:23.976 Got JSON-RPC error response 00:28:23.976 response: 00:28:23.976 { 00:28:23.976 "code": -13, 00:28:23.976 "message": "Permission denied" 00:28:23.976 } 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:23.976 07:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:23.976 07:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:24.911 07:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:24.911 rmmod nvme_tcp 00:28:24.911 rmmod nvme_fabrics 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 76771 ']' 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 76771 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 76771 ']' 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 76771 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76771 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.911 killing process with pid 76771 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76771' 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 76771 00:28:24.911 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 76771 00:28:25.188 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:25.188 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:28:25.188 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:28:25.189 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:28:25.448 07:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:25.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.964 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:25.964 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:25.964 07:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.64O /tmp/spdk.key-null.y2r /tmp/spdk.key-sha256.qIl /tmp/spdk.key-sha384.EU6 /tmp/spdk.key-sha512.y25 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:25.964 07:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:26.223 0000:00:03.0 
(1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.223 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:26.223 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:26.223 00:28:26.223 real 0m32.982s 00:28:26.223 user 0m29.528s 00:28:26.223 sys 0m3.314s 00:28:26.223 07:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.223 ************************************ 00:28:26.223 END TEST nvmf_auth_host 00:28:26.223 ************************************ 00:28:26.223 07:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.223 07:25:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:26.223 07:25:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:26.223 07:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.482 ************************************ 00:28:26.482 START TEST nvmf_digest 00:28:26.482 ************************************ 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:26.482 * Looking for test storage... 00:28:26.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.482 --rc genhtml_branch_coverage=1 00:28:26.482 --rc genhtml_function_coverage=1 00:28:26.482 --rc genhtml_legend=1 00:28:26.482 --rc geninfo_all_blocks=1 00:28:26.482 --rc geninfo_unexecuted_blocks=1 00:28:26.482 00:28:26.482 ' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.482 --rc genhtml_branch_coverage=1 00:28:26.482 --rc genhtml_function_coverage=1 00:28:26.482 --rc genhtml_legend=1 00:28:26.482 --rc geninfo_all_blocks=1 00:28:26.482 --rc geninfo_unexecuted_blocks=1 00:28:26.482 00:28:26.482 ' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.482 --rc genhtml_branch_coverage=1 00:28:26.482 --rc genhtml_function_coverage=1 00:28:26.482 --rc genhtml_legend=1 00:28:26.482 --rc geninfo_all_blocks=1 00:28:26.482 --rc geninfo_unexecuted_blocks=1 00:28:26.482 00:28:26.482 ' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.482 --rc genhtml_branch_coverage=1 00:28:26.482 --rc genhtml_function_coverage=1 00:28:26.482 --rc genhtml_legend=1 00:28:26.482 --rc geninfo_all_blocks=1 00:28:26.482 --rc geninfo_unexecuted_blocks=1 00:28:26.482 00:28:26.482 ' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.482 07:25:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.482 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:26.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:26.483 07:25:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@280 -- # nvmf_veth_init 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@223 -- # create_target_ns 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # create_main_bridge 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@105 -- # delete_main_bridge 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.483 07:25:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator0 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:28:26.483 
07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target0 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0 up 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target0_br 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:28:26.483 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target0 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:28:26.484 10.0.0.1 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:26.484 10.0.0.2 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator0 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:28:26.484 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:28:26.743 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:28:26.743 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:28:26.743 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.743 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 
00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target0_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1 up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1 up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target1_br 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772163 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:28:26.744 10.0.0.3 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772164 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:28:26.744 10.0.0.4 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator1 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:28:26.744 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator1_br 
master nvmf_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target1_br 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 2 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:26.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:28:26.745 00:28:26.745 --- 10.0.0.1 ping statistics --- 00:28:26.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.745 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:26.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.020 ms 00:28:26.745 00:28:26.745 --- 10.0.0.2 ping statistics --- 00:28:26.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.745 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:28:26.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:26.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:28:26.745 00:28:26.745 --- 10.0.0.3 ping statistics --- 00:28:26.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.745 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:28:26.745 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:28:26.746 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:28:26.746 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:28:26.746 00:28:26.746 --- 10.0.0.4 ping statistics --- 00:28:26.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.746 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # return 0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:26.746 
07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 
00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:26.746 ' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 ************************************ 00:28:26.746 START TEST nvmf_digest_clean 00:28:26.746 ************************************ 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:26.746 07:25:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=78652 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 78652 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78652 ']' 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.747 07:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.005 [2024-11-20 07:25:50.943918] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:27.005 [2024-11-20 07:25:50.943972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.005 [2024-11-20 07:25:51.080237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.005 [2024-11-20 07:25:51.113629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.005 [2024-11-20 07:25:51.113816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.005 [2024-11-20 07:25:51.113828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.005 [2024-11-20 07:25:51.113833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.005 [2024-11-20 07:25:51.113838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
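The setup.sh trace above reduces to a small veth-plus-bridge topology: each initiatorN/targetN pair is two veth links, the *_br peers are enslaved to the nvmf_br bridge, and the target side of each pair is moved into the nvmf_ns_spdk namespace, which is why nvmf_tgt is launched under ip netns exec here. A condensed, standalone sketch of the initiator1/target1 leg follows, with device names, addresses and commands taken from the log; the bridge and namespace are created before this excerpt, and the octet split inside val_to_ip is an assumption (the traced helper already receives the four octets pre-split), so treat those parts as reconstructions:

# Condensed rebuild of the initiator1/target1 leg traced above (ordering
# simplified; 'ip netns add nvmf_ns_spdk' and the nvmf_br bridge are
# assumed to exist already, as they are set up earlier in the run).
val_to_ip() {
    # Assumed derivation: the log converts pool value 167772163
    # (0x0A000003) into 10.0.0.3; bit-shifting is one way to split it.
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) $((val & 0xff))
}

ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk                      # add_to_ns

ip addr add 10.0.0.3/24 dev initiator1                      # set_ip 167772163
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias       # stored for later
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias

ip link set initiator1 up                                   # set_up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
ip link set target1_br master nvmf_br && ip link set target1_br up

# tcp transport: open the NVMe/TCP port on the initiator-facing device
# (the ipts wrapper also tags the rule with an SPDK_NVMF comment).
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT

Writing each address into ifalias is what lets get_ip_address recover it later with a plain cat, which is exactly the pattern repeated through the ping and legacy-env-var sections of the trace above.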
00:28:27.005 [2024-11-20 07:25:51.114100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.938 [2024-11-20 07:25:51.835635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:27.938 null0 00:28:27.938 [2024-11-20 07:25:51.873636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.938 [2024-11-20 07:25:51.897701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78684 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78684 /var/tmp/bperf.sock 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78684 ']' 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.938 07:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.938 [2024-11-20 07:25:51.938134] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:27.938 [2024-11-20 07:25:51.938342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78684 ] 00:28:27.938 [2024-11-20 07:25:52.076958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.938 [2024-11-20 07:25:52.112525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.872 07:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.872 07:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:28.872 07:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:28.872 07:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:28.872 07:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.872 [2024-11-20 07:25:53.022509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:28.872 07:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.872 07:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.130 nvme0n1 00:28:29.130 07:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.130 07:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.388 Running I/O for 2 seconds... 
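While this first 2-second run executes, it is worth spelling out the bperf pattern that repeats four times in this test: bdevperf is started idle against its own RPC socket, initialized, pointed at the listener the target opened on 10.0.0.2:4420, and then driven from bdevperf.py. Condensed from the trace, with the repo-relative paths abbreviated:

# 4 KiB random reads, qd 128, 2 s, data digest on (--ddgst); -z parks the
# app until perform_tests, --wait-for-rpc defers framework init so the
# accel layer can be configured first.
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later runs differ only in the -w/-o/-q triple: randread or randwrite, 4096 or 131072 bytes, queue depth 128 or 16.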
00:28:31.254 15367.00 IOPS, 60.03 MiB/s [2024-11-20T07:25:55.457Z] 16573.50 IOPS, 64.74 MiB/s 00:28:31.254 Latency(us) 00:28:31.254 [2024-11-20T07:25:55.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.254 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:31.254 nvme0n1 : 2.01 16626.60 64.95 0.00 0.00 7695.53 6099.89 18955.03 00:28:31.254 [2024-11-20T07:25:55.457Z] =================================================================================================================== 00:28:31.254 [2024-11-20T07:25:55.457Z] Total : 16626.60 64.95 0.00 0.00 7695.53 6099.89 18955.03 00:28:31.254 { 00:28:31.254 "results": [ 00:28:31.254 { 00:28:31.254 "job": "nvme0n1", 00:28:31.254 "core_mask": "0x2", 00:28:31.254 "workload": "randread", 00:28:31.254 "status": "finished", 00:28:31.254 "queue_depth": 128, 00:28:31.254 "io_size": 4096, 00:28:31.254 "runtime": 2.008949, 00:28:31.254 "iops": 16626.60425924202, 00:28:31.254 "mibps": 64.94767288766414, 00:28:31.254 "io_failed": 0, 00:28:31.254 "io_timeout": 0, 00:28:31.254 "avg_latency_us": 7695.531513267284, 00:28:31.254 "min_latency_us": 6099.88923076923, 00:28:31.254 "max_latency_us": 18955.027692307693 00:28:31.254 } 00:28:31.254 ], 00:28:31.254 "core_count": 1 00:28:31.254 } 00:28:31.254 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:31.254 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:31.254 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:31.254 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:31.254 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:31.254 | select(.opcode=="crc32c") 00:28:31.254 | "\(.module_name) \(.executed)"' 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78684 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78684 ']' 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78684 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78684 00:28:31.512 killing process with pid 78684 00:28:31.512 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.512 00:28:31.512 Latency(us) 00:28:31.512 [2024-11-20T07:25:55.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:31.512 [2024-11-20T07:25:55.715Z] =================================================================================================================== 00:28:31.512 [2024-11-20T07:25:55.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78684' 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78684 00:28:31.512 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78684 00:28:31.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78738 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78738 /var/tmp/bperf.sock 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78738 ']' 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.769 [2024-11-20 07:25:55.780971] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:31.769 [2024-11-20 07:25:55.781162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.769 Zero copy mechanism will not be used. 
00:28:31.769 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78738 ] 00:28:31.769 [2024-11-20 07:25:55.916541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.769 [2024-11-20 07:25:55.946625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.703 [2024-11-20 07:25:56.804164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.703 07:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.960 nvme0n1 00:28:32.960 07:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.960 07:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.217 Zero copy mechanism will not be used. 00:28:33.217 Running I/O for 2 seconds... 
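A quick sanity check on the large-block numbers that follow: bdevperf reports MiB/s as IOPS * io_size / 2^20, so at the 131072-byte block size used here every 8 IOPS is exactly 1 MiB/s. For example, against the steady-state sample reported below:

# 11952 IOPS at 128 KiB -> 1494 MiB/s, matching the log line
echo $((11952 * 131072 / 1048576))    # prints 1494
# for the 4 KiB runs the divisor is 256 instead: 16626 IOPS -> ~64.9 MiB/s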
00:28:35.120 11936.00 IOPS, 1492.00 MiB/s [2024-11-20T07:25:59.323Z] 11952.00 IOPS, 1494.00 MiB/s 00:28:35.120 Latency(us) 00:28:35.120 [2024-11-20T07:25:59.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.120 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:35.120 nvme0n1 : 2.00 11947.28 1493.41 0.00 0.00 1336.83 1272.91 2810.49 00:28:35.120 [2024-11-20T07:25:59.323Z] =================================================================================================================== 00:28:35.120 [2024-11-20T07:25:59.323Z] Total : 11947.28 1493.41 0.00 0.00 1336.83 1272.91 2810.49 00:28:35.120 { 00:28:35.120 "results": [ 00:28:35.120 { 00:28:35.120 "job": "nvme0n1", 00:28:35.120 "core_mask": "0x2", 00:28:35.120 "workload": "randread", 00:28:35.120 "status": "finished", 00:28:35.120 "queue_depth": 16, 00:28:35.120 "io_size": 131072, 00:28:35.120 "runtime": 2.002129, 00:28:35.120 "iops": 11947.282118185192, 00:28:35.120 "mibps": 1493.410264773149, 00:28:35.120 "io_failed": 0, 00:28:35.120 "io_timeout": 0, 00:28:35.120 "avg_latency_us": 1336.8345027013122, 00:28:35.120 "min_latency_us": 1272.9107692307691, 00:28:35.120 "max_latency_us": 2810.4861538461537 00:28:35.120 } 00:28:35.120 ], 00:28:35.120 "core_count": 1 00:28:35.120 } 00:28:35.120 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:35.120 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:35.120 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:35.120 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:35.120 | select(.opcode=="crc32c") 00:28:35.120 | "\(.module_name) \(.executed)"' 00:28:35.120 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78738 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78738 ']' 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78738 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78738 00:28:35.403 killing process with pid 78738 00:28:35.403 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.403 00:28:35.403 Latency(us) 00:28:35.403 [2024-11-20T07:25:59.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:28:35.403 [2024-11-20T07:25:59.606Z] =================================================================================================================== 00:28:35.403 [2024-11-20T07:25:59.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78738' 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78738 00:28:35.403 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78738 00:28:35.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78793 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78793 /var/tmp/bperf.sock 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78793 ']' 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.404 07:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.404 [2024-11-20 07:25:59.555828] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:28:35.404 [2024-11-20 07:25:59.556020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78793 ] 00:28:35.661 [2024-11-20 07:25:59.691485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.661 [2024-11-20 07:25:59.721174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.227 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.227 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:36.227 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:36.227 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:36.227 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.484 [2024-11-20 07:26:00.584904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:36.484 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.484 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.743 nvme0n1 00:28:36.743 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:36.743 07:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.000 Running I/O for 2 seconds... 
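While this third run (4 KiB random writes, qd 128) ticks over: after every run the script pulls accel framework statistics back over the same socket and checks that CRC-32C digests were actually computed, and by the expected module (software here, since scan_dsa=false). The check, lifted from the trace into a directly runnable form:

scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
# digest.sh reads the output as "acc_module acc_executed" and asserts
# acc_executed > 0 and acc_module == software (exp_module).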
00:28:38.868 21591.00 IOPS, 84.34 MiB/s [2024-11-20T07:26:03.071Z] 21717.50 IOPS, 84.83 MiB/s 00:28:38.868 Latency(us) 00:28:38.868 [2024-11-20T07:26:03.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.868 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.868 nvme0n1 : 2.01 21698.78 84.76 0.00 0.00 5894.18 5116.85 11695.66 00:28:38.868 [2024-11-20T07:26:03.071Z] =================================================================================================================== 00:28:38.868 [2024-11-20T07:26:03.071Z] Total : 21698.78 84.76 0.00 0.00 5894.18 5116.85 11695.66 00:28:38.868 { 00:28:38.868 "results": [ 00:28:38.868 { 00:28:38.868 "job": "nvme0n1", 00:28:38.868 "core_mask": "0x2", 00:28:38.868 "workload": "randwrite", 00:28:38.868 "status": "finished", 00:28:38.868 "queue_depth": 128, 00:28:38.868 "io_size": 4096, 00:28:38.868 "runtime": 2.007624, 00:28:38.868 "iops": 21698.784234498093, 00:28:38.868 "mibps": 84.76087591600817, 00:28:38.868 "io_failed": 0, 00:28:38.868 "io_timeout": 0, 00:28:38.868 "avg_latency_us": 5894.184785041647, 00:28:38.868 "min_latency_us": 5116.84923076923, 00:28:38.868 "max_latency_us": 11695.655384615384 00:28:38.868 } 00:28:38.868 ], 00:28:38.868 "core_count": 1 00:28:38.868 } 00:28:38.868 07:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:38.868 07:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:38.868 07:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:38.868 07:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:38.868 | select(.opcode=="crc32c") 00:28:38.868 | "\(.module_name) \(.executed)"' 00:28:38.868 07:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78793 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78793 ']' 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78793 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.126 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78793 00:28:39.126 killing process with pid 78793 00:28:39.126 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.126 00:28:39.126 Latency(us) 00:28:39.126 [2024-11-20T07:26:03.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:39.126 [2024-11-20T07:26:03.329Z] =================================================================================================================== 00:28:39.126 [2024-11-20T07:26:03.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.127 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.127 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.127 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78793' 00:28:39.127 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78793 00:28:39.127 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78793 00:28:39.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78854 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78854 /var/tmp/bperf.sock 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78854 ']' 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.385 07:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.385 [2024-11-20 07:26:03.360259] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:28:39.385 [2024-11-20 07:26:03.360450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78854 ]
00:28:39.385 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:39.385 Zero copy mechanism will not be used.
00:28:39.385 [2024-11-20 07:26:03.488008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.385 [2024-11-20 07:26:03.517389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:40.320 [2024-11-20 07:26:04.413403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.320 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.578 nvme0n1
00:28:40.578 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:40.578 07:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:40.837 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:40.837 Zero copy mechanism will not be used.
00:28:40.837 Running I/O for 2 seconds...
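The trace above is the heart of the digest-clean pass: bdevperf comes up with --wait-for-rpc on a private socket, framework_start_init arms the accel framework, the controller is attached with TCP data digest enabled (--ddgst), and perform_tests starts the timed run whose results follow. A minimal stand-alone sketch of the same sequence, built only from the binaries and addresses visible in this trace (the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation):

  #!/usr/bin/env bash
  # Sketch only -- mirrors the traced bperf flow; paths, IP and NQN are the ones in the log.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle; -z plus --wait-for-rpc keeps it waiting for RPC-driven setup.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  # Simplified waitforlisten: poll until the RPC socket answers.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

  # The pass/fail check after the run: which accel module computed the digests, and how often.
  "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

That last jq pipe is exactly the filter in the host/digest.sh@37 trace: the test reads back "module executed", insists the module matches the expected one (software on this run), and requires a non-zero executed count, so a digest-enabled run that silently skipped crc32c would fail.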
00:28:42.707 11226.00 IOPS, 1403.25 MiB/s [2024-11-20T07:26:06.910Z] 11248.50 IOPS, 1406.06 MiB/s
00:28:42.707 Latency(us)
00:28:42.707 [2024-11-20T07:26:06.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.707 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:42.707 nvme0n1 : 2.00 11243.30 1405.41 0.00 0.00 1420.22 1083.86 4108.60
00:28:42.707 [2024-11-20T07:26:06.910Z] ===================================================================================================================
00:28:42.707 [2024-11-20T07:26:06.910Z] Total : 11243.30 1405.41 0.00 0.00 1420.22 1083.86 4108.60
00:28:42.707 {
00:28:42.707 "results": [
00:28:42.707 {
00:28:42.707 "job": "nvme0n1",
00:28:42.707 "core_mask": "0x2",
00:28:42.707 "workload": "randwrite",
00:28:42.707 "status": "finished",
00:28:42.707 "queue_depth": 16,
00:28:42.707 "io_size": 131072,
00:28:42.707 "runtime": 2.002348,
00:28:42.707 "iops": 11243.300365371055,
00:28:42.707 "mibps": 1405.4125456713818,
00:28:42.707 "io_failed": 0,
00:28:42.707 "io_timeout": 0,
00:28:42.707 "avg_latency_us": 1420.2185026770858,
00:28:42.707 "min_latency_us": 1083.8646153846155,
00:28:42.707 "max_latency_us": 4108.6030769230765
00:28:42.707 }
00:28:42.707 ],
00:28:42.707 "core_count": 1
00:28:42.707 }
00:28:42.707 07:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:42.707 07:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:42.707 07:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:42.707 | select(.opcode=="crc32c")
00:28:42.707 | "\(.module_name) \(.executed)"'
00:28:42.707 07:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:42.707 07:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78854
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78854 ']'
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78854
00:28:42.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78854
00:28:42.967 killing process with pid 78854 Received shutdown signal, test time was about 2.000000 seconds
00:28:42.967
00:28:42.967 Latency(us) [2024-11-20T07:26:07.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.967 [2024-11-20T07:26:07.170Z] ===================================================================================================================
00:28:42.967 [2024-11-20T07:26:07.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78854'
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78854
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78854
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 78652
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78652 ']'
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78652
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:42.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78652
00:28:43.225 killing process with pid 78652 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78652'
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78652
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78652
00:28:43.225
00:28:43.225 real 0m16.376s
00:28:43.225 user 0m31.760s
00:28:43.225 sys 0m3.500s
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:43.225 ************************************
00:28:43.225 END TEST nvmf_digest_clean
00:28:43.225 ************************************
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:43.225 ************************************
00:28:43.225 START TEST nvmf_digest_error
00:28:43.225 ************************************
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=78932
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 78932
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78932 ']'
00:28:43.225 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:43.226 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:43.226 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:43.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:43.226 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:43.226 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:43.226 07:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:43.483 [2024-11-20 07:26:07.362758] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:28:43.483 [2024-11-20 07:26:07.362815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:43.483 [2024-11-20 07:26:07.497990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.483 [2024-11-20 07:26:07.526804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:43.483 [2024-11-20 07:26:07.526835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:43.483 [2024-11-20 07:26:07.526841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:43.483 [2024-11-20 07:26:07.526844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:43.483 [2024-11-20 07:26:07.526848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
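Starting nvmf_tgt with --wait-for-rpc matters here: the nvmf_digest_error variant uses that pre-init window to re-route the crc32c operation to the accel "error" module (the accel_assign_opc trace a little further down) before the framework comes up, and later arms corruption with accel_error_inject_error so the digests the target computes come back wrong and the initiator reports data digest errors. A condensed sketch of that wiring, with rpc_cmd standing in for the suite's default-socket RPC helper (the /var/tmp/spdk.sock default and the framework_start_init step are assumptions inferred from the surrounding trace):

  # Sketch only -- the digest-error setup condensed from the traces below.
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # assumed default socket /var/tmp/spdk.sock

  rpc_cmd accel_assign_opc -o crc32c -m error     # only possible while still in --wait-for-rpc
  rpc_cmd framework_start_init                    # assumed: leaves the wait-for-rpc state
  # ...transport, listener and null bdev setup elided...
  rpc_cmd accel_error_inject_error -o crc32c -t disable         # start from a clean state
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # then inject corruption (flags as traced)

Each corrupted digest then surfaces on the initiator side as the paired nvme_tcp "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" lines that dominate the rest of this run.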
00:28:43.483 [2024-11-20 07:26:07.527054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.049 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.049 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:44.049 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:44.049 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.049 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.308 [2024-11-20 07:26:08.263368] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.308 [2024-11-20 07:26:08.299031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:44.308 null0 00:28:44.308 [2024-11-20 07:26:08.335693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.308 [2024-11-20 07:26:08.359755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78964 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78964 /var/tmp/bperf.sock 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78964 ']' 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/bperf.sock 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.308 07:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.308 [2024-11-20 07:26:08.399794] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:44.308 [2024-11-20 07:26:08.399850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78964 ] 00:28:44.566 [2024-11-20 07:26:08.535366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.566 [2024-11-20 07:26:08.565730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.566 [2024-11-20 07:26:08.593197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:45.132 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.132 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:45.132 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.132 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.389 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.646 nvme0n1 00:28:45.646 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:45.646 07:26:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.646 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.646 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.646 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:45.646 07:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.903 Running I/O for 2 seconds... 00:28:45.903 [2024-11-20 07:26:09.874716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.903 [2024-11-20 07:26:09.875081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-20 07:26:09.875243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-20 07:26:09.888052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.903 [2024-11-20 07:26:09.888203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-20 07:26:09.888256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-20 07:26:09.900995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.903 [2024-11-20 07:26:09.901059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-20 07:26:09.901092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-20 07:26:09.913800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.903 [2024-11-20 07:26:09.913925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.914002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.927071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.927214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.927315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.940039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.940167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14159 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.940262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.953242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.953383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.953487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.966158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.966216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.966270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.979011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.979072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.979103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:09.991752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:09.991872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:09.991915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.004882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.005021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.005109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.018414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.018544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.018646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.032008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.032137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.032260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.045353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.045474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.045556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.058672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.058796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.058985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.071756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.071871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.071954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.084800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.084915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.084958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.904 [2024-11-20 07:26:10.097897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:45.904 [2024-11-20 07:26:10.098026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.904 [2024-11-20 07:26:10.098115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.111072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.111198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.111308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.124040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 
00:28:46.163 [2024-11-20 07:26:10.124162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.124259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.136907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.137050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.137132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.149886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.150008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.150115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.162839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.162960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.163072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.175722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.175847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.175932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.188840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.188956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.189040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.201668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.201790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.201871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.214563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.214683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.214768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.227780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.227910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.227994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.240823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.240950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.241024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.253780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.253837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.163 [2024-11-20 07:26:10.253872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.163 [2024-11-20 07:26:10.266489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.163 [2024-11-20 07:26:10.266594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.266641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.279300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.279419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.279453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.292096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.292208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.292264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.305119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.305262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.305356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.318450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.318573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.318651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.331728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.331935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.344857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.344979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.345085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-20 07:26:10.357754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.164 [2024-11-20 07:26:10.357867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-20 07:26:10.357945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.370737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.370855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.370942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.384094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.384215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.384312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.396956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.397106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.410139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.410234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.410342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.422967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.423055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.423096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.435734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.435818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.435857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.448490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.448570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.448578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.461174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.461197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.461202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.474040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.474062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.474068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.486793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.486813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.486819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.499414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.499438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.499444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.512289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.512307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.512313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.524897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.524916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.524921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.537602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.537621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.537626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.550198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.550217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.423 [2024-11-20 07:26:10.550229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.423 [2024-11-20 07:26:10.562828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.423 [2024-11-20 07:26:10.562847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.424 [2024-11-20 07:26:10.562853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.424 [2024-11-20 07:26:10.575468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.424 [2024-11-20 07:26:10.575487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.424 [2024-11-20 07:26:10.575493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.424 [2024-11-20 07:26:10.588078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.424 [2024-11-20 07:26:10.588097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.424 [2024-11-20 07:26:10.588102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.424 [2024-11-20 07:26:10.600714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.424 [2024-11-20 07:26:10.600733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.424 [2024-11-20 07:26:10.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.424 [2024-11-20 07:26:10.613314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.424 [2024-11-20 07:26:10.613333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.424 [2024-11-20 07:26:10.613338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.626123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.626142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.626147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.639073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.639090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.639096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.652013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.652032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7837 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.652038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.664890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.664908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.664914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.677816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.677835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.677840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.695907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.695927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.695932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.708518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.708537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.708542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.721187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.721207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.721212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.734128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.734151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.734157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.747084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.747104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:5059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.747109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.759962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.759989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.759996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.772774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.772801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.683 [2024-11-20 07:26:10.772809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.683 [2024-11-20 07:26:10.785520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.683 [2024-11-20 07:26:10.785546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.785552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.798170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.798195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.798201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.810877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.810907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.810914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.823706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.823730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.823736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.836735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.836757] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.836763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.849750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.849769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.849775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 19482.00 IOPS, 76.10 MiB/s [2024-11-20T07:26:10.887Z] [2024-11-20 07:26:10.863632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.863651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.863657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.684 [2024-11-20 07:26:10.876464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.684 [2024-11-20 07:26:10.876483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.684 [2024-11-20 07:26:10.876488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.889369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.889388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.889394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.902255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.902273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.902279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.915091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.915109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.915114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.927739] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.927758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.927763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.940375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.940394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.940400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.953000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.953019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.953025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.965641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.965660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.965665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.978275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.978294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.978299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:10.990899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:10.990918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:10.990924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.003511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.003530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.003535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.016136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.016157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.016163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.029055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.029078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.029083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.041704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.041726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.041731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.054357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.054378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.054384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.067010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.067031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.067036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.079634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.079653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.079658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.092252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.092272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.092277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.104860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.104880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.104885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.117480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.117499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.117505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.130099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.130118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.130124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.943 [2024-11-20 07:26:11.142744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:46.943 [2024-11-20 07:26:11.142762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.943 [2024-11-20 07:26:11.142768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.155354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.155373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.155378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.167955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.167979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.180600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.180619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.180625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.193217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.193243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.193249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.205822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.205841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.205846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.202 [2024-11-20 07:26:11.218720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.202 [2024-11-20 07:26:11.218739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.202 [2024-11-20 07:26:11.218745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.231572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.231590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.244549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.244568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.244574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.257276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.257295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.257301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.269889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.269908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.203 [2024-11-20 07:26:11.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.282709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.282728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.282733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.295333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.295351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.295356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.307939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.307964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.320574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.320593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.320598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.333192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.333213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.333218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.346120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.346140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.346145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.359129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.359148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.359154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.372114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.372133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.372139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.385016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.385035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.385041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.203 [2024-11-20 07:26:11.397632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.203 [2024-11-20 07:26:11.397651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.203 [2024-11-20 07:26:11.397657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.410436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.410455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.410460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.423138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.423158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.423163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.435742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.435767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.435772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.448340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.448359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.448364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.460937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.460956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.460962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.473689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.473708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.486290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.486309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.486314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.498903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.498921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.498926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.516983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.517003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.517008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.529799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.529818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.529824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.542716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 
[2024-11-20 07:26:11.542734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.542739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.555391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.555409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.555415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.568089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.568108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.568114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.580965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.580988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.580994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.593590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.593692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.593700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.606306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.462 [2024-11-20 07:26:11.606328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.462 [2024-11-20 07:26:11.606333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.462 [2024-11-20 07:26:11.619299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.463 [2024-11-20 07:26:11.619320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.463 [2024-11-20 07:26:11.619325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.463 [2024-11-20 07:26:11.632125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x19fa2c0) 00:28:47.463 [2024-11-20 07:26:11.632148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.463 [2024-11-20 07:26:11.632154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.463 [2024-11-20 07:26:11.644736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.463 [2024-11-20 07:26:11.644816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.463 [2024-11-20 07:26:11.644823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.463 [2024-11-20 07:26:11.657520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.463 [2024-11-20 07:26:11.657542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.463 [2024-11-20 07:26:11.657547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.721 [2024-11-20 07:26:11.670472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.721 [2024-11-20 07:26:11.670493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.721 [2024-11-20 07:26:11.670499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.721 [2024-11-20 07:26:11.683393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.683479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.696171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.696194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.696199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.708798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.708820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.708825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.721437] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.721458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.721464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.734042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.734064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.734070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.746666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.746687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.746693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.759273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.759301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.771881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.771903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.771908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.784812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.784834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.784839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.722 [2024-11-20 07:26:11.797766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0) 00:28:47.722 [2024-11-20 07:26:11.797854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.722 [2024-11-20 07:26:11.797861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:28:47.722 [2024-11-20 07:26:11.810505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0)
00:28:47.722 [2024-11-20 07:26:11.810529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.722 [2024-11-20 07:26:11.810535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.722 [2024-11-20 07:26:11.823164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0)
00:28:47.722 [2024-11-20 07:26:11.823190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.722 [2024-11-20 07:26:11.823196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.722 [2024-11-20 07:26:11.835867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0)
00:28:47.722 [2024-11-20 07:26:11.835890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.722 [2024-11-20 07:26:11.835896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.722 [2024-11-20 07:26:11.848525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19fa2c0)
00:28:47.722 [2024-11-20 07:26:11.848545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.722 [2024-11-20 07:26:11.848551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.722 19671.50 IOPS, 76.84 MiB/s
00:28:47.722 Latency(us)
00:28:47.722 [2024-11-20T07:26:11.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:47.722 nvme0n1 : 2.00 19698.90 76.95 0.00 0.00 6494.11 5948.65 24399.56
00:28:47.722 [2024-11-20T07:26:11.925Z] ===================================================================================================================
00:28:47.722 [2024-11-20T07:26:11.925Z] Total : 19698.90 76.95 0.00 0.00 6494.11 5948.65 24399.56
00:28:47.722 {
00:28:47.722   "results": [
00:28:47.722     {
00:28:47.722       "job": "nvme0n1",
00:28:47.722       "core_mask": "0x2",
00:28:47.722       "workload": "randread",
00:28:47.722       "status": "finished",
00:28:47.722       "queue_depth": 128,
00:28:47.722       "io_size": 4096,
00:28:47.722       "runtime": 2.003716,
00:28:47.722       "iops": 19698.899444831502,
00:28:47.722       "mibps": 76.94882595637306,
00:28:47.722       "io_failed": 0,
00:28:47.722       "io_timeout": 0,
00:28:47.722       "avg_latency_us": 6494.11159944107,
00:28:47.722       "min_latency_us": 5948.652307692308,
00:28:47.722       "max_latency_us": 24399.55692307692
00:28:47.722     }
00:28:47.722   ],
00:28:47.722   "core_count": 1
00:28:47.722 }
00:28:47.722 07:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:47.722 07:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:47.722 07:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:47.722 07:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:47.722 | .driver_specific
00:28:47.722 | .nvme_error
00:28:47.722 | .status_code
00:28:47.722 | .command_transient_transport_error'
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 ))
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78964
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78964 ']'
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78964
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:47.980 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78964
00:28:47.981 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:47.981 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:47.981 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78964'
00:28:47.981 killing process with pid 78964
00:28:47.981 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78964
00:28:47.981 Received shutdown signal, test time was about 2.000000 seconds
00:28:47.981
00:28:47.981 Latency(us)
00:28:47.981 [2024-11-20T07:26:12.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.981 [2024-11-20T07:26:12.184Z] ===================================================================================================================
00:28:47.981 [2024-11-20T07:26:12.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:47.981 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78964
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:48.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
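The pass/fail check traced just above reduces to one RPC and one jq filter: with --nvme-error-stat enabled, bdev_get_iostat exposes per-status-code NVMe error counters, and the run passes when the transient-transport-error count is non-zero (154 here, matching the COMMAND TRANSIENT TRANSPORT ERROR completions logged during the run). A minimal standalone sketch of that check, assuming the same socket path and bdev name as this run (the rpc and errs variables are introduced for illustration and are not part of digest.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Read the per-bdev NVMe error counters accumulated since --nvme-error-stat was enabled.
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Pass only if at least one injected crc32c corruption surfaced as a
    # transient transport error completion.
    (( errs > 0 )) || exit 1

Because bdev_nvme_set_options was given --bdev-retry-count -1, each corrupted digest is retried until it succeeds, which is why the results JSON above reports "io_failed": 0 even though the error counter recorded every injected failure.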
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79018
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79018 /var/tmp/bperf.sock
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79018 ']'
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:48.239 07:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.239 [2024-11-20 07:26:12.232313] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:28:48.239 [2024-11-20 07:26:12.232483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536).
00:28:48.239 Zero copy mechanism will not be used.
--log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79018 ]
00:28:48.239 [2024-11-20 07:26:12.365216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:48.239 [2024-11-20 07:26:12.396501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.239 [2024-11-20 07:26:12.425036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.173 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.432 nvme0n1
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:49.432 07:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:49.692 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:49.692 Zero copy mechanism will not be used.
00:28:49.692 Running I/O for 2 seconds...
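Condensed from the xtrace lines above, the setup for this second run is four RPCs followed by the perform_tests trigger. A sketch under two assumptions: bperf_rpc targets /var/tmp/bperf.sock (shown explicitly in the trace), while the socket rpc_cmd resolves to is not visible here (it may be the bdevperf app or the main test app), so it is left as rpc.py's default:

    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    tgt_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # stands in for rpc_cmd; default socket assumed
    # Track NVMe errors per status code and retry failed I/O indefinitely.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c injection off while the controller attaches with data digest enabled...
    $tgt_rpc accel_error_inject_error -o crc32c -t disable
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then corrupt crc32c results so received data digests fail to verify
    # (the -i 32 argument is taken verbatim from the trace).
    $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the 2-second 128 KiB randread workload configured on the bdevperf command line.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the 128 KiB I/O size and queue depth 16 set by run_bperf_err above, each corrupted digest again shows up below as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, this time with len:32 per command.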
00:28:49.692 [2024-11-20 07:26:13.697420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:49.692 [2024-11-20 07:26:13.697462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.692 [2024-11-20 07:26:13.697470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.692 [2024-11-20 07:26:13.700524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:49.692 [2024-11-20 07:26:13.700554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.692 [2024-11-20 07:26:13.700561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.692 [2024-11-20 07:26:13.703440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:49.692 [2024-11-20 07:26:13.703467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.692 [2024-11-20 07:26:13.703473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern — nvme_tcp.c:1365 data digest error on tqpair=(0x1931400), nvme_qpair.c:243 READ sqid:1 (cid cycling 0-15, len:32, varying lba), nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats roughly every 2.9 ms from 07:26:13.706 through 07:26:14.114 ...]
00:28:49.958 [2024-11-20 07:26:14.117258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:49.958 [2024-11-20 07:26:14.117349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.958 [2024-11-20 07:26:14.117393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.959 [2024-11-20 07:26:14.120303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:49.959 [2024-11-20 07:26:14.120393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.959 [2024-11-20 07:26:14.120438] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.123396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.123489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.123535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.126438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.126533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.126577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.129478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.129569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.129615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.132540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.132632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.132671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.135568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.135595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.135601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.138448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.138473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.138479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.141342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.141367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 
[2024-11-20 07:26:14.141373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.144214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.144247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.144253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.147095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.147120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.147126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.959 [2024-11-20 07:26:14.149981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:49.959 [2024-11-20 07:26:14.150007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.959 [2024-11-20 07:26:14.150013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.219 [2024-11-20 07:26:14.152910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.219 [2024-11-20 07:26:14.153002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.153010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.155868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.155895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.155901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.158776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.158801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.158807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.161693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.161719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.161725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.164568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.164593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.164599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.167482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.167573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.167581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.170443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.170468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.173399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.173424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.173430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.176289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.176313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.176319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.179135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.179236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.179243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.182093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.182115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.182121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.185003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.185028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.185034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.187845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.187870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.187876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.190698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.190785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.190793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.193587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.193612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.193618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.196401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.196426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.196432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.199218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.199250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.199256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.202072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.202160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.202168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.204957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.204984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.204989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.207771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.207797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.207802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.210611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.210635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.210641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.213446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.213534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.213541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.216377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.216402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.216408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.219259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.220 [2024-11-20 07:26:14.219282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.220 [2024-11-20 07:26:14.219288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.220 [2024-11-20 07:26:14.222142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 
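The *ERROR* line above is the receive-side digest check failing: an NVMe/TCP data PDU can carry a CRC32C over its payload (the DDGST trailer), and nvme_tcp_accel_seq_recv_compute_crc32_done rejects the PDU when the CRC32C it recomputes disagrees with the trailer. Below is a minimal sketch of that comparison, assuming a plain bitwise CRC32C and hypothetical payload/trailer values; the driver itself offloads the computation through its accel framework, so this is illustrative, not SPDK's code.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the checksum NVMe/TCP uses for its HDGST/DDGST PDU digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32] = { 0xab };  /* hypothetical received C2H data */
    uint32_t ddgst_from_wire = 0;    /* hypothetical (corrupted) trailer */

    if (crc32c(payload, sizeof(payload)) != ddgst_from_wire) {
        /* This is the condition behind the *ERROR* line above. */
        fprintf(stderr, "data digest error\n");
        return 1;
    }
    return 0;
}

A digest mismatch says nothing about the media, only about the transport, which is why the READ below, like every other one in this run, is completed as a retryable (00/22) rather than a media error.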
00:28:50.220 [2024-11-20 07:26:14.222168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.220 [2024-11-20 07:26:14.222174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the pattern continues from 07:26:14.225048 through 07:26:14.279927, cid cycling 0-15 on qid:1 with varying lba, every READ again completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
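Every completion print ends with the same status breakdown: the "(00/22)" pair is SCT/SC from the completion's status field (SCT 0x0 generic command status, SC 0x22 Transient Transport Error), and dnr:0 marks it retryable. A small sketch of that decoding, using the status-field bit layout from the NVMe base specification; the function and variable names here are illustrative, not SPDK's.

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status+phase half of completion dword 3:
 * P bit 0, SC bits 8:1, SCT bits 11:9, CRD bits 13:12, M bit 14,
 * DNR bit 15 (NVMe base spec layout). */
static void print_status(uint16_t sts)
{
    unsigned p   = sts & 0x1;
    unsigned sc  = (sts >> 1) & 0xff;
    unsigned sct = (sts >> 9) & 0x7;
    unsigned m   = (sts >> 14) & 0x1;
    unsigned dnr = (sts >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x22 (Transient Transport Error): the
     * retryable status attached to every READ in this run. */
    print_status(0x22u << 1);   /* -> (00/22) p:0 m:0 dnr:0 */
    return 0;
}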
[... the pattern continues from 07:26:14.282732 through 07:26:14.457735, cid cycling 0-15 on qid:1 with varying lba: data digest error on tqpair=(0x1931400), READ command print, completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:28:50.484 [2024-11-20 07:26:14.460583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400)
00:28:50.484 [2024-11-20 07:26:14.460608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.484 [2024-11-20 07:26:14.460613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0
dnr:0 00:28:50.484 [2024-11-20 07:26:14.463521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.463547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.463553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.466454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.466478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.466483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.469369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.469394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.469400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.472278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.472303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.472309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.475155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.475181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.475187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.478063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.478089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.478095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.480982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.481007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.481013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.483928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.484021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.484029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.486916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.486942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.484 [2024-11-20 07:26:14.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.484 [2024-11-20 07:26:14.489822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.484 [2024-11-20 07:26:14.489848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.489854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.492736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.492762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.492768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.495602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.495633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.498538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.498626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.498634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.501505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.501531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.501537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.504417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.504442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.504448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.507360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.507384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.507390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.510253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.510276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.510282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.513129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.513154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.516021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.516047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.516053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.518952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.518977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.518983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.521882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.521973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 
[2024-11-20 07:26:14.521981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.524834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.524856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.524862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.527755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.527781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.527787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.530647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.530673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.530678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.533554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.533580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.533585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.536445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.536535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.536543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.539422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.539448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.539454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.542342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.542373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.542379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.545246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.545270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.545275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.548142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.548167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.548173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.551050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.551143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.551150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.554032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.554057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.554063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.556965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.556991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.556997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.559907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.559933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.559938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.562789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.562880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.562887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.565757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.565779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.565785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.568693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.568719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.568725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.571567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.571592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.571598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.574451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.574475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.574480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.577365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.577389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.577395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.580305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.580329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.580335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.583213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.583248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.583254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.586070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.485 [2024-11-20 07:26:14.586096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.485 [2024-11-20 07:26:14.586101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.485 [2024-11-20 07:26:14.588953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.589049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.589056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.591938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.591961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.591967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.594831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.594857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.594862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.597762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.597787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.597793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.600656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.600682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.600687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.603527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 
00:28:50.486 [2024-11-20 07:26:14.603619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.603626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.606513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.606538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.606544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.609417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.609442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.609448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.612310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.612334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.612340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.615195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.615296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.615304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.618175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.618197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.618203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.621086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.621112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.621118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.624010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.624043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.626933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.627030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.629898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.629924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.629930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.632842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.632867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.632873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.635763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.635789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.635794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.638622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.638648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.638653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.641519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.641609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.641617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.644495] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.644520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.644526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.647363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.647389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.647395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.650271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.650294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.650300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.653181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.653277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.653285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.656108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.656130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.656136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.659026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.659052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.659058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.661942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.661974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
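[Editorial note, not part of the console output: the triplets repeating above — an *ERROR* "data digest error" from nvme_tcp.c, the offending READ command, then a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) — show the NVMe/TCP data digest (DDGST) check failing on received data PDUs. The DDGST is a CRC32C computed over the PDU payload; "(00/22)" is SPDK's (status code type / status code) in hex, i.e. generic status 0x0 with status code 0x22, Transient Transport Error, and dnr:0 means Do Not Retry is clear, so the host may retry. The sketch below is illustrative only — it is not SPDK's code, and the payload size and corruption step are assumptions chosen to mimic what this test apparently injects — but it shows the receiver-side check that produces these log lines.

/*
 * Minimal sketch (not SPDK source): how an NVMe/TCP receiver detects a
 * "data digest error". The sender places a CRC32C over the data PDU
 * payload in the PDU's DDGST field; the receiver recomputes it and, on
 * mismatch, fails the command with a transport-level error instead of
 * surfacing corrupt data.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			/* Branchless: XOR in the polynomial only when the LSB is set. */
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t payload[512];              /* assumed payload size for illustration */

	memset(payload, 0xA5, sizeof(payload));

	/* Digest the sender would carry in the PDU's DDGST field. */
	uint32_t ddgst = crc32c(payload, sizeof(payload));

	/* Flip one bit "in flight", as the error-injection test effectively does. */
	payload[100] ^= 0x01;

	/* Receiver-side check: recompute and compare. */
	if (crc32c(payload, sizeof(payload)) != ddgst) {
		fprintf(stderr, "data digest error: completing command with transient transport error\n");
		return 1;
	}
	return 0;
}

Failing the command at the transport with a retryable status (00/22, dnr:0), rather than delivering the payload, is the point of the digest: the corruption is contained and the host is free to reissue the READ. End of editorial note; console output resumes below.]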
00:28:50.486 [2024-11-20 07:26:14.664856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.664947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.664955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.667824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.667847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.667853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.670756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.670782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.670788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.673666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.673692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.673698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.676597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.676623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.676629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.486 [2024-11-20 07:26:14.679468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.486 [2024-11-20 07:26:14.679559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.486 [2024-11-20 07:26:14.679567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.682569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.682594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.682600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.685428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.685453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.685459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.688341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.688366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.688372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.691265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.691288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.691294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.746 10555.00 IOPS, 1319.38 MiB/s [2024-11-20T07:26:14.949Z] [2024-11-20 07:26:14.695455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.695481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.695487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.698410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.698435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.698441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.701312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.701336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.701342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.704183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.704209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 
07:26:14.704215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.707163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.707190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.707196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.710100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.710127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.710133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.713030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.713141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.713149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.716065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.716091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.716098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.719008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.719035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.719041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.721960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.721988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.721994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.724917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.725025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.725033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.746 [2024-11-20 07:26:14.727949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.746 [2024-11-20 07:26:14.727973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.746 [2024-11-20 07:26:14.727979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.730880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.730907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.730913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.733842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.733867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.733873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.736724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.736749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.736755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.739670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.739776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.739784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.742660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.742686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.742692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.745589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.745616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.745621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.748528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.748554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.748560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.751473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.751499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.751505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.754402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.754426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.754433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.757333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.757358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.757363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.760260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.760285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.760291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.763186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.763218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.766088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.766189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.766196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.769109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.769135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.769140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.772023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.772049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.772055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.774914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.774938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.774944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.777786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.777882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.777890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.780783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.780805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.780811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.783701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.783726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.783732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.786561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 
00:28:50.747 [2024-11-20 07:26:14.786587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.786593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.789365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.789388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.789394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.792200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.792237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.792243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.795123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.795149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.795155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.798001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.798028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.798033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.800911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.801004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.801012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.803841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.803867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.803873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.747 [2024-11-20 07:26:14.806685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.747 [2024-11-20 07:26:14.806710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.747 [2024-11-20 07:26:14.806715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.809520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.809545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.809551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.812353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.812377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.812384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.815189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.815214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.815232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.818025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.818050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.820884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.820909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.820914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.823717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.823806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.823814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.826662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.826688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.826694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.829478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.829503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.829509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.832325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.832350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.832356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.835163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.835266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.835274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.838068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.838093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.838099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.840905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.840931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.840937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.843762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.843787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.843794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:28:50.748 [2024-11-20 07:26:14.846577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.846667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.846675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.849463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.849491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.849496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.852319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.852344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.852350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.855147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.855172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.855178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.857967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.858056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.858064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.860892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.860918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.860924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.863722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.863749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.863754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.866558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.866583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.866588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.869397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.869422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.869428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.872206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.872244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.872250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.875100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.875125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.875131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.878003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.878028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.878034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.880934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.881026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.748 [2024-11-20 07:26:14.883886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.748 [2024-11-20 07:26:14.883911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.748 [2024-11-20 07:26:14.883917] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.886805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.886830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.886836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.889646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.889671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.889677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.892480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.892505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.892511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.895301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.895324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.895330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.898093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.898119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.898125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.900957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.900982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.900988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.903796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 
07:26:14.903826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.906600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.906688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.906695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.909471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.909496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.909501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.912304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.912330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.912336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.915146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.915170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.915176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.917974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.918065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.918073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.920918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.920943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.920949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.923773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.923799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.923805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.926635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.926661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.926667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.929448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.929543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.929550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.932360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.932384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.932389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.935198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.935235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.935240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.938056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.938081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.938087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.940892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.940991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.940999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:50.749 [2024-11-20 07:26:14.943897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:50.749 [2024-11-20 07:26:14.943920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.749 [2024-11-20 07:26:14.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.946810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.946842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.949705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.949730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.952627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.952652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.952658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.955536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.955637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.955645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.958566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.958592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.958598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.961456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.961481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.961487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.964286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.964310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.964316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.967110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.967207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.967215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.970039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.970065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.970071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.972874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.972900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.972906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.975731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.975757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.975762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.978591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.978684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.978692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.981499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.981525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.981531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.984344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 
[2024-11-20 07:26:14.984368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.984374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.987213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.987246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.987252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.990059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.990151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.990159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.993040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.993066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.993072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.995946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.995972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.995978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:14.998858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:14.998883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:14.998889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:15.001763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:15.001853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.010 [2024-11-20 07:26:15.001860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.010 [2024-11-20 07:26:15.004733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1931400) 00:28:51.010 [2024-11-20 07:26:15.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.004761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.007648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.007675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.007681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.010524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.010549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.010554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.013342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.013366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.013372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.016144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.016169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.016175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.018960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.018985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.018990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.021799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.021825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.021831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.024656] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.024746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.024753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.027544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.027569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.027575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.030395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.030421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.030427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.033282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.033306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.033312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.036115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.036205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.036213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.038985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.039011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.039016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.041811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.041837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.041843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.044661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.044686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.044692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.047495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.047583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.047591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.050382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.050406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.050411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.053198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.053231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.053237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.056019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.056045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.056051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.058818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.058907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.058914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.061720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.061746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.061752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.011 [2024-11-20 07:26:15.064534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.011 [2024-11-20 07:26:15.064558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.011 [2024-11-20 07:26:15.064564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the sequence above — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400), followed by the nvme_qpair.c: 243:nvme_io_qpair_print_command READ command print and its nvme_qpair.c: 474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats verbatim for qid:1, cid cycling 0-15, with varying lba, roughly every 3 ms from 2024-11-20 07:26:15.067 through 07:26:15.479 (log offsets 00:28:51.011-00:28:51.541) ...]
00:28:51.541 [2024-11-20 07:26:15.482231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.482254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.482260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.485137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.485163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.485169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.488056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.488082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.488088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.490983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.491073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.491080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.494003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.494029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.494034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.496935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.496960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.496966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.499828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.499853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.499859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.502714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.502805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.502813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.505620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.505646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.505652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.508466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.508491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.508497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.511298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.511322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.511328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.514104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.514193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.514201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.517009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.517035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.517041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.519848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.519873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.519879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.522698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 
07:26:15.522724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.525518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.525606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.525613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.528417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.528442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.528448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.541 [2024-11-20 07:26:15.531299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.541 [2024-11-20 07:26:15.531323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.541 [2024-11-20 07:26:15.531329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.534098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.534123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.534128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.536948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.537034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.537041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.539816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.539838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.539843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.542681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.542706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.542711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.545506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.545531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.545537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.548311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.548334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.548340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.551145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.551172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.551178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.553963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.553988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.553994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.556811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.556836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.556842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.559734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.559829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.559838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.562703] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.562729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.562734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.565568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.565594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.565600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.568415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.568439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.568445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.571267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.571291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.571297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.574088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.574112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.574118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.576922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.576947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.576953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.579754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.579779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.579785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
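[Editor's note] Every record in this run follows the same two-step pattern: the initiator's receive path (nvme_tcp_accel_seq_recv_compute_crc32_done, nvme_tcp.c:1365) detects a data digest mismatch on the qpair, and the affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status code type 0x0 with status code 0x22, NVMe's "Command Transient Transport Error". A quick way to tally such completions from a saved copy of this output is a plain grep; this is a hypothetical post-processing sketch (the file name is an assumption) and not part of the test itself, which instead reads the count from the driver's error statistics via bdev_get_iostat further down:

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_output.log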
00:28:51.542 [2024-11-20 07:26:15.582588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.582676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.582683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.585498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.585523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.585529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.588337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.588360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.588366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.591173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.591198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.594018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.594108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.594115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.596950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.596977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.596982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.599839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.599865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.599871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.602750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.602776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.602782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.605652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.605748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.542 [2024-11-20 07:26:15.608652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.542 [2024-11-20 07:26:15.608677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.542 [2024-11-20 07:26:15.608682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.611562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.611587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.611593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.614430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.614454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.614460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.617304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.617328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.617333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.620156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.620187] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.623158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.623255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.623262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.626150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.626176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.626182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.629047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.629135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.629143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.632022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.632044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.632050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.634898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.634924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.634929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.637824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.637849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.637855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.640719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.640806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.640813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.643688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.643710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.643716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.646582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.646608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.646614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.649488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.649513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.649519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.652428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.652454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.652460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.655339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.655364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.655369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.658241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.658265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.658270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.661133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.661159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:51.543 [2024-11-20 07:26:15.661164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.664048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.664073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.664079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.666972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.667064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.667071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.669960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.669985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.669991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.672921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.672947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.672953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.675834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.675860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.675866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.678744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.678834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.678842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.681692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.681717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.681723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.684591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.684616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.543 [2024-11-20 07:26:15.684622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:51.543 [2024-11-20 07:26:15.687517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.543 [2024-11-20 07:26:15.687542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.544 [2024-11-20 07:26:15.687549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:51.544 [2024-11-20 07:26:15.690439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.544 [2024-11-20 07:26:15.690463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.544 [2024-11-20 07:26:15.690468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.544 10633.00 IOPS, 1329.12 MiB/s [2024-11-20T07:26:15.747Z] [2024-11-20 07:26:15.694261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1931400) 00:28:51.544 [2024-11-20 07:26:15.694284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.544 [2024-11-20 07:26:15.694290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:51.544 00:28:51.544 Latency(us) 00:28:51.544 [2024-11-20T07:26:15.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.544 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:51.544 nvme0n1 : 2.00 10628.80 1328.60 0.00 0.00 1502.70 1348.53 7057.72 00:28:51.544 [2024-11-20T07:26:15.747Z] =================================================================================================================== 00:28:51.544 [2024-11-20T07:26:15.747Z] Total : 10628.80 1328.60 0.00 0.00 1502.70 1348.53 7057.72 00:28:51.544 { 00:28:51.544 "results": [ 00:28:51.544 { 00:28:51.544 "job": "nvme0n1", 00:28:51.544 "core_mask": "0x2", 00:28:51.544 "workload": "randread", 00:28:51.544 "status": "finished", 00:28:51.544 "queue_depth": 16, 00:28:51.544 "io_size": 131072, 00:28:51.544 "runtime": 2.002296, 00:28:51.544 "iops": 10628.798139735583, 00:28:51.544 "mibps": 1328.599767466948, 00:28:51.544 "io_failed": 0, 00:28:51.544 "io_timeout": 0, 00:28:51.544 "avg_latency_us": 1502.7037331656222, 00:28:51.544 "min_latency_us": 1348.5292307692307, 00:28:51.544 "max_latency_us": 7057.723076923077 00:28:51.544 } 00:28:51.544 ], 00:28:51.544 
"core_count": 1 00:28:51.544 } 00:28:51.544 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:51.544 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:51.544 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:51.544 | .driver_specific 00:28:51.544 | .nvme_error 00:28:51.544 | .status_code 00:28:51.544 | .command_transient_transport_error' 00:28:51.544 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 687 > 0 )) 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79018 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79018 ']' 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79018 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79018 00:28:51.803 killing process with pid 79018 00:28:51.803 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.803 00:28:51.803 Latency(us) 00:28:51.803 [2024-11-20T07:26:16.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.803 [2024-11-20T07:26:16.006Z] =================================================================================================================== 00:28:51.803 [2024-11-20T07:26:16.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79018' 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79018 00:28:51.803 07:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79018 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:52.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79073 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79073 /var/tmp/bperf.sock 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79073 ']' 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.062 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:52.062 [2024-11-20 07:26:16.080054] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:52.062 [2024-11-20 07:26:16.080272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79073 ] 00:28:52.062 [2024-11-20 07:26:16.207390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.062 [2024-11-20 07:26:16.239660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.320 [2024-11-20 07:26:16.269795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:52.886 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.886 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:52.886 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:53.145 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:53.145 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.145 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.146 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.146 
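[Editor's note] With the randread pass finished and bdevperf process 79018 killed, the script moves on to the randwrite variant of the same flow: start a fresh bdevperf idle on the bperf socket, enable error statistics, attach the controller with data digest enabled, arm crc32c corruption, then fire perform_tests — at which point the tcp.c:2233 "Data digest error" records below start streaming. A consolidated sketch of that sequence, with paths and arguments copied from the trace (the comments and the backgrounding convention are added; rpc_cmd in the trace is not routed through bperf_rpc, so the injection call's socket is environment-dependent and shown here without -s as an assumption):

  # Start bdevperf on core 1 (-m 2), idle until perform_tests arrives (-z):
  # randwrite, 4 KiB I/O, queue depth 128, 2-second run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!   # 79073 in this run

  # Count errors per NVMe status code and never retry at the bdev layer,
  # so every transient transport error is surfaced in the iostat counters.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled (--ddgst).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c error injection (arguments verbatim from the trace), then run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error \
      -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests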
07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.146 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.405 nvme0n1 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:53.405 07:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.405 Running I/O for 2 seconds... 00:28:53.405 [2024-11-20 07:26:17.501475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb048 00:28:53.405 [2024-11-20 07:26:17.502596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.502627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.513362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb8b8 00:28:53.405 [2024-11-20 07:26:17.514429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.514450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.525179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc128 00:28:53.405 [2024-11-20 07:26:17.526236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.526258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.537207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc998 00:28:53.405 [2024-11-20 07:26:17.538272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.538294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.549411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with 
pdu=0x2000166fd208 00:28:53.405 [2024-11-20 07:26:17.550465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.550488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.561587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fda78 00:28:53.405 [2024-11-20 07:26:17.562620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.562642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.573841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fe2e8 00:28:53.405 [2024-11-20 07:26:17.574872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.574897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.586031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166feb58 00:28:53.405 [2024-11-20 07:26:17.587041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.587063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:53.405 [2024-11-20 07:26:17.603360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fef90 00:28:53.405 [2024-11-20 07:26:17.605287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.405 [2024-11-20 07:26:17.605310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.615525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166feb58 00:28:53.664 [2024-11-20 07:26:17.617433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.617455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.627523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fe2e8 00:28:53.664 [2024-11-20 07:26:17.629361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.629383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.639327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with 
pdu=0x2000166fda78 00:28:53.664 [2024-11-20 07:26:17.641147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.641258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.651216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fd208 00:28:53.664 [2024-11-20 07:26:17.653026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.653112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.663105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc998 00:28:53.664 [2024-11-20 07:26:17.664990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.665013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.675025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc128 00:28:53.664 [2024-11-20 07:26:17.676829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.676850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.687102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb8b8 00:28:53.664 [2024-11-20 07:26:17.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.688946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.699008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb048 00:28:53.664 [2024-11-20 07:26:17.700761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.700780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.710808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fa7d8 00:28:53.664 [2024-11-20 07:26:17.712546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.712568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.722674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc275b0) with pdu=0x2000166f9f68 00:28:53.664 [2024-11-20 07:26:17.724394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.724414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.734635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f96f8 00:28:53.664 [2024-11-20 07:26:17.736382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.664 [2024-11-20 07:26:17.736404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:53.664 [2024-11-20 07:26:17.746527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8e88 00:28:53.664 [2024-11-20 07:26:17.748213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.748309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.758704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8618 00:28:53.665 [2024-11-20 07:26:17.760430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.760452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.770836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7da8 00:28:53.665 [2024-11-20 07:26:17.772546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.772567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.782978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7538 00:28:53.665 [2024-11-20 07:26:17.784671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.784692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.794870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f6cc8 00:28:53.665 [2024-11-20 07:26:17.796504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.796525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.806653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc275b0) with pdu=0x2000166f6458 00:28:53.665 [2024-11-20 07:26:17.808271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.808292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.818446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5be8 00:28:53.665 [2024-11-20 07:26:17.820043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.830296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5378 00:28:53.665 [2024-11-20 07:26:17.831895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.831917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.842096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4b08 00:28:53.665 [2024-11-20 07:26:17.843765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.843783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:53.665 [2024-11-20 07:26:17.853955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4298 00:28:53.665 [2024-11-20 07:26:17.855539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.665 [2024-11-20 07:26:17.855559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.865906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f3a28 00:28:53.924 [2024-11-20 07:26:17.867501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.867522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.877832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f31b8 00:28:53.924 [2024-11-20 07:26:17.879381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.879401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.889624] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f2948 00:28:53.924 [2024-11-20 07:26:17.891150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.891241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.901475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f20d8 00:28:53.924 [2024-11-20 07:26:17.902992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.903075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.913337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f1868 00:28:53.924 [2024-11-20 07:26:17.914834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.914856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.925285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0ff8 00:28:53.924 [2024-11-20 07:26:17.926770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.926792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.937326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0788 00:28:53.924 [2024-11-20 07:26:17.938846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.938867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.949624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eff18 00:28:53.924 [2024-11-20 07:26:17.951126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.951147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.961797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ef6a8 00:28:53.924 [2024-11-20 07:26:17.963290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.963311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.973980] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eee38 00:28:53.924 [2024-11-20 07:26:17.975463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.975485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.986143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ee5c8 00:28:53.924 [2024-11-20 07:26:17.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.987636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:17.998297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166edd58 00:28:53.924 [2024-11-20 07:26:17.999737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:17.999759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:18.010471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ed4e8 00:28:53.924 [2024-11-20 07:26:18.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.924 [2024-11-20 07:26:18.011901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:53.924 [2024-11-20 07:26:18.022620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ecc78 00:28:53.924 [2024-11-20 07:26:18.024014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.024035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.034784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ec408 00:28:53.925 [2024-11-20 07:26:18.036143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.036164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.046898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ebb98 00:28:53.925 [2024-11-20 07:26:18.048254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 
07:26:18.058923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eb328 00:28:53.925 [2024-11-20 07:26:18.060271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.060292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.070883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eaab8 00:28:53.925 [2024-11-20 07:26:18.072185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.072206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.082690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ea248 00:28:53.925 [2024-11-20 07:26:18.083974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.083995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.094484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e99d8 00:28:53.925 [2024-11-20 07:26:18.095753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.095775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.106261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e9168 00:28:53.925 [2024-11-20 07:26:18.107531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.107552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:53.925 [2024-11-20 07:26:18.118060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e88f8 00:28:53.925 [2024-11-20 07:26:18.119322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.925 [2024-11-20 07:26:18.119342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:54.183 [2024-11-20 07:26:18.129843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e8088 00:28:54.183 [2024-11-20 07:26:18.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.131128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:54.183 
[2024-11-20 07:26:18.141756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e7818 00:28:54.183 [2024-11-20 07:26:18.142983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:54.183 [2024-11-20 07:26:18.153546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e6fa8 00:28:54.183 [2024-11-20 07:26:18.154754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.154776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:54.183 [2024-11-20 07:26:18.165323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e6738 00:28:54.183 [2024-11-20 07:26:18.166521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.166541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:54.183 [2024-11-20 07:26:18.177106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e5ec8 00:28:54.183 [2024-11-20 07:26:18.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.178395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:54.183 [2024-11-20 07:26:18.189000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e5658 00:28:54.183 [2024-11-20 07:26:18.190160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.183 [2024-11-20 07:26:18.190255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.200888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e4de8 00:28:54.184 [2024-11-20 07:26:18.202049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.202138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.213107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e4578 00:28:54.184 [2024-11-20 07:26:18.214257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.214278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 
m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.224909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e3d08 00:28:54.184 [2024-11-20 07:26:18.226028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.226049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.236983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e3498 00:28:54.184 [2024-11-20 07:26:18.238080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.238102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.248755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e2c28 00:28:54.184 [2024-11-20 07:26:18.249836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.249857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.260535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e23b8 00:28:54.184 [2024-11-20 07:26:18.261602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.261623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.272320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e1b48 00:28:54.184 [2024-11-20 07:26:18.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.273393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.284085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e12d8 00:28:54.184 [2024-11-20 07:26:18.285194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.295953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e0a68 00:28:54.184 [2024-11-20 07:26:18.296984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.297067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.307822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e01f8 00:28:54.184 [2024-11-20 07:26:18.308835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.308856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.319620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166df988 00:28:54.184 [2024-11-20 07:26:18.320618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.320640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.331446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166df118 00:28:54.184 [2024-11-20 07:26:18.332459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.332480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.343478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166de8a8 00:28:54.184 [2024-11-20 07:26:18.344446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.344466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.355264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166de038 00:28:54.184 [2024-11-20 07:26:18.356211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.356306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:54.184 [2024-11-20 07:26:18.372077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166de038 00:28:54.184 [2024-11-20 07:26:18.373942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.184 [2024-11-20 07:26:18.373963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.384001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166de8a8 00:28:54.443 [2024-11-20 07:26:18.385901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.385922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.396157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166df118 00:28:54.443 [2024-11-20 07:26:18.398045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.398066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.408195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166df988 00:28:54.443 [2024-11-20 07:26:18.410018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.410039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.420032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e01f8 00:28:54.443 [2024-11-20 07:26:18.421837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.421857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.431844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e0a68 00:28:54.443 [2024-11-20 07:26:18.433631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.433652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.443631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e12d8 00:28:54.443 [2024-11-20 07:26:18.445403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.445423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.455421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e1b48 00:28:54.443 [2024-11-20 07:26:18.457170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.457190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.467212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e23b8 00:28:54.443 [2024-11-20 07:26:18.469074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.469093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.479147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e2c28 00:28:54.443 [2024-11-20 07:26:18.480971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.480987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:54.443 21128.00 IOPS, 82.53 MiB/s [2024-11-20T07:26:18.646Z] [2024-11-20 07:26:18.491354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e3498 00:28:54.443 [2024-11-20 07:26:18.493123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.493142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.503238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e3d08 00:28:54.443 [2024-11-20 07:26:18.504931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.443 [2024-11-20 07:26:18.504951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:54.443 [2024-11-20 07:26:18.515240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e4578 00:28:54.443 [2024-11-20 07:26:18.516964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.516983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.527407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e4de8 00:28:54.444 [2024-11-20 07:26:18.529125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.529144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.539550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e5658 00:28:54.444 [2024-11-20 07:26:18.541259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.541278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.551719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e5ec8 00:28:54.444 [2024-11-20 07:26:18.553408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.553493] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.563979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e6738 00:28:54.444 [2024-11-20 07:26:18.565653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.565675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.576117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e6fa8 00:28:54.444 [2024-11-20 07:26:18.577786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.577806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.588326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e7818 00:28:54.444 [2024-11-20 07:26:18.589968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.589989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.600522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e8088 00:28:54.444 [2024-11-20 07:26:18.602162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.602186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.612785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e88f8 00:28:54.444 [2024-11-20 07:26:18.614424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.614448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.624994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e9168 00:28:54.444 [2024-11-20 07:26:18.626608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 07:26:18.626631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:54.444 [2024-11-20 07:26:18.637193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166e99d8 00:28:54.444 [2024-11-20 07:26:18.638813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.444 [2024-11-20 
07:26:18.638836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.649435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ea248 00:28:54.703 [2024-11-20 07:26:18.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.651055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.661671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eaab8 00:28:54.703 [2024-11-20 07:26:18.663247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.663270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.673842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eb328 00:28:54.703 [2024-11-20 07:26:18.675397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.675419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.686002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ebb98 00:28:54.703 [2024-11-20 07:26:18.687548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.687569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.698143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ec408 00:28:54.703 [2024-11-20 07:26:18.699674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.699696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.710325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ecc78 00:28:54.703 [2024-11-20 07:26:18.711893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.711915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.722572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ed4e8 00:28:54.703 [2024-11-20 07:26:18.724052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.703 [2024-11-20 07:26:18.724074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.734712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166edd58 00:28:54.703 [2024-11-20 07:26:18.736173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.736195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.746877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ee5c8 00:28:54.703 [2024-11-20 07:26:18.748333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.748354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.758916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eee38 00:28:54.703 [2024-11-20 07:26:18.760317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.760338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.770753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ef6a8 00:28:54.703 [2024-11-20 07:26:18.772160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.782887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eff18 00:28:54.703 [2024-11-20 07:26:18.784297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.784318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.795015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0788 00:28:54.703 [2024-11-20 07:26:18.796375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.703 [2024-11-20 07:26:18.796396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:54.703 [2024-11-20 07:26:18.806800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0ff8 00:28:54.704 [2024-11-20 07:26:18.808138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24390 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:54.704 [2024-11-20 07:26:18.808236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.818788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f1868 00:28:54.704 [2024-11-20 07:26:18.820115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.820136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.830829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f20d8 00:28:54.704 [2024-11-20 07:26:18.832174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.842818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f2948 00:28:54.704 [2024-11-20 07:26:18.844111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.844133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.854670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f31b8 00:28:54.704 [2024-11-20 07:26:18.855957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.855981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.866520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f3a28 00:28:54.704 [2024-11-20 07:26:18.867785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.867807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.878505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4298 00:28:54.704 [2024-11-20 07:26:18.879759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.879781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.890422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4b08 00:28:54.704 [2024-11-20 07:26:18.891666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14321 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:54.704 [2024-11-20 07:26:18.891689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:54.704 [2024-11-20 07:26:18.902380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5378 00:28:54.963 [2024-11-20 07:26:18.903641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.903664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.914583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5be8 00:28:54.963 [2024-11-20 07:26:18.915822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.915845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.926568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f6458 00:28:54.963 [2024-11-20 07:26:18.927781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.927805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.938720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f6cc8 00:28:54.963 [2024-11-20 07:26:18.939905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.939929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.950639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7538 00:28:54.963 [2024-11-20 07:26:18.951808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.951832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.962479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7da8 00:28:54.963 [2024-11-20 07:26:18.963629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.963651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.974313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8618 00:28:54.963 [2024-11-20 07:26:18.975451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10008 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.975474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.986181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8e88 00:28:54.963 [2024-11-20 07:26:18.987401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.987420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:18.998087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f96f8 00:28:54.963 [2024-11-20 07:26:18.999302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:18.999321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.010018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f9f68 00:28:54.963 [2024-11-20 07:26:19.011142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.011242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.022139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fa7d8 00:28:54.963 [2024-11-20 07:26:19.023235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.023257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.034036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb048 00:28:54.963 [2024-11-20 07:26:19.035131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.035215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.046276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb8b8 00:28:54.963 [2024-11-20 07:26:19.047440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.047530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.058628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc128 00:28:54.963 [2024-11-20 07:26:19.059764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:13190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.059853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.070988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc998 00:28:54.963 [2024-11-20 07:26:19.072106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.072195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.083353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fd208 00:28:54.963 [2024-11-20 07:26:19.084459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.963 [2024-11-20 07:26:19.084545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.963 [2024-11-20 07:26:19.095672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fda78 00:28:54.964 [2024-11-20 07:26:19.096735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.096822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:54.964 [2024-11-20 07:26:19.107680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fe2e8 00:28:54.964 [2024-11-20 07:26:19.108727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.108813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:54.964 [2024-11-20 07:26:19.119668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166feb58 00:28:54.964 [2024-11-20 07:26:19.120700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.120786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:54.964 [2024-11-20 07:26:19.136593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fef90 00:28:54.964 [2024-11-20 07:26:19.138545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.138634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:54.964 [2024-11-20 07:26:19.148593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166feb58 00:28:54.964 [2024-11-20 07:26:19.150526] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.150610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:54.964 [2024-11-20 07:26:19.160570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fe2e8 00:28:54.964 [2024-11-20 07:26:19.162488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.964 [2024-11-20 07:26:19.162575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.172571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fda78 00:28:55.222 [2024-11-20 07:26:19.174474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.174559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.184536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fd208 00:28:55.222 [2024-11-20 07:26:19.186415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.186504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.196494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc998 00:28:55.222 [2024-11-20 07:26:19.198367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.198452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.208497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fc128 00:28:55.222 [2024-11-20 07:26:19.210370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.210456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.220504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb8b8 00:28:55.222 [2024-11-20 07:26:19.222359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.222448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.232511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fb048 00:28:55.222 [2024-11-20 07:26:19.234361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.234447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.244611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166fa7d8 00:28:55.222 [2024-11-20 07:26:19.246430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.246517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.256600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f9f68 00:28:55.222 [2024-11-20 07:26:19.258407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.258494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.268635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f96f8 00:28:55.222 [2024-11-20 07:26:19.270425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.270511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.280619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8e88 00:28:55.222 [2024-11-20 07:26:19.282399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.282487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.292634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f8618 00:28:55.222 [2024-11-20 07:26:19.294404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.294489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.304628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7da8 00:28:55.222 [2024-11-20 07:26:19.306383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.306474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.316646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f7538 00:28:55.222 [2024-11-20 07:26:19.318387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.318478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.328677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f6cc8 00:28:55.222 [2024-11-20 07:26:19.330407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.340662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f6458 00:28:55.222 [2024-11-20 07:26:19.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.342326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.352764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5be8 00:28:55.222 [2024-11-20 07:26:19.354390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.354411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.364733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f5378 00:28:55.222 [2024-11-20 07:26:19.366339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.366360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.376525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4b08 00:28:55.222 [2024-11-20 07:26:19.378101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.378183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.388397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f4298 00:28:55.222 [2024-11-20 07:26:19.389959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.389979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.400239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f3a28 00:28:55.222 [2024-11-20 
07:26:19.401786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.401806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:55.222 [2024-11-20 07:26:19.412255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f31b8 00:28:55.222 [2024-11-20 07:26:19.413825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.222 [2024-11-20 07:26:19.413846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:55.480 [2024-11-20 07:26:19.424342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f2948 00:28:55.480 [2024-11-20 07:26:19.425864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.480 [2024-11-20 07:26:19.425885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.480 [2024-11-20 07:26:19.436311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f20d8 00:28:55.480 [2024-11-20 07:26:19.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.480 [2024-11-20 07:26:19.437833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:55.480 [2024-11-20 07:26:19.448099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f1868 00:28:55.480 [2024-11-20 07:26:19.449654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.480 [2024-11-20 07:26:19.449675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:55.480 [2024-11-20 07:26:19.459976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0ff8 00:28:55.480 [2024-11-20 07:26:19.461455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.480 [2024-11-20 07:26:19.461474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:55.480 [2024-11-20 07:26:19.471983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166f0788 00:28:55.480 [2024-11-20 07:26:19.473494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.480 [2024-11-20 07:26:19.473514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:55.481 [2024-11-20 07:26:19.484146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166eff18 
00:28:55.481 [2024-11-20 07:26:19.485639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:55.481 [2024-11-20 07:26:19.485660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:55.481 21063.50 IOPS, 82.28 MiB/s [2024-11-20T07:26:19.684Z] [2024-11-20 07:26:19.496639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc275b0) with pdu=0x2000166ef6a8
00:28:55.481 [2024-11-20 07:26:19.498100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:55.481 [2024-11-20 07:26:19.498122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:55.481
00:28:55.481 Latency(us)
00:28:55.481 [2024-11-20T07:26:19.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:55.481 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:55.481 nvme0n1 : 2.01 21104.17 82.44 0.00 0.00 6060.64 1638.40 23189.66
00:28:55.481 [2024-11-20T07:26:19.684Z] ===================================================================================================================
00:28:55.481 [2024-11-20T07:26:19.684Z] Total : 21104.17 82.44 0.00 0.00 6060.64 1638.40 23189.66
00:28:55.481 {
00:28:55.481 "results": [
00:28:55.481 {
00:28:55.481 "job": "nvme0n1",
00:28:55.481 "core_mask": "0x2",
00:28:55.481 "workload": "randwrite",
00:28:55.481 "status": "finished",
00:28:55.481 "queue_depth": 128,
00:28:55.481 "io_size": 4096,
00:28:55.481 "runtime": 2.008229,
00:28:55.481 "iops": 21104.166905268274,
00:28:55.481 "mibps": 82.4381519737042,
00:28:55.481 "io_failed": 0,
00:28:55.481 "io_timeout": 0,
00:28:55.481 "avg_latency_us": 6060.641776661355,
00:28:55.481 "min_latency_us": 1638.4,
00:28:55.481 "max_latency_us": 23189.66153846154
00:28:55.481 }
00:28:55.481 ],
00:28:55.481 "core_count": 1
00:28:55.481 }
00:28:55.481 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:55.481 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:55.481 | .driver_specific
00:28:55.481 | .nvme_error
00:28:55.481 | .status_code
00:28:55.481 | .command_transient_transport_error'
00:28:55.481 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:55.481 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
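The pass/fail check above comes from the get_transient_errcount helper traced in host/digest.sh: because bdev_nvme_set_options was called with --nvme-error-stat, bdev_get_iostat reports per-status-code NVMe error counters, and jq pulls out the transient-transport-error tally (166 for this run). A minimal bash sketch of the same check, assuming the bperf RPC socket and rpc.py path used in this run:

get_transient_errcount() {
    local bdev=$1
    # .driver_specific.nvme_error is present only because --nvme-error-stat
    # was passed to bdev_nvme_set_options earlier in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes when at least one injected digest corruption surfaced
# as a transient transport error on the WRITE path:
(( $(get_transient_errcount nvme0n1) > 0 ))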
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79073
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79073 ']'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79073
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79073
00:28:55.739 killing process with pid 79073 Received shutdown signal, test time was about 2.000000 seconds
00:28:55.739
00:28:55.739 Latency(us)
00:28:55.739 [2024-11-20T07:26:19.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:55.739 [2024-11-20T07:26:19.942Z] ===================================================================================================================
00:28:55.739 [2024-11-20T07:26:19.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79073'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79073
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79073
00:28:55.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79133
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79133 /var/tmp/bperf.sock
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79133 ']'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:55.739 07:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:55.739 [2024-11-20 07:26:19.873120] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
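The harness then repeats the experiment at a larger block size: run_bperf_err randwrite 131072 16 launches a fresh bdevperf (pid 79133) with -w randwrite -o 131072 -t 2 -q 16 -z, and waitforlisten blocks until the new process answers on its RPC socket. A sketch of that launch-and-wait pattern; using rpc_get_methods as the readiness probe is an assumption here, the real waitforlisten helper lives in common/autotest_common.sh:

SPDK=/home/vagrant/spdk_repo/spdk
# -z keeps bdevperf idle until perform_tests is sent over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll the UNIX-domain RPC socket until bdevperf responds (max_retries=100).
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods \
        >/dev/null 2>&1 && break
    sleep 0.1
done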
00:28:55.739 [2024-11-20 07:26:19.873304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79133 ]
00:28:55.739 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:55.739 Zero copy mechanism will not be used.
00:28:55.997 [2024-11-20 07:26:20.005379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.997 [2024-11-20 07:26:20.041116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:55.997 [2024-11-20 07:26:20.072541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:28:56.562 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:56.562 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:56.562 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.562 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:56.820 07:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:57.079 nvme0n1
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:57.079 07:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:57.338 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:57.339 Zero copy mechanism will not be used.
00:28:57.339 Running I/O for 2 seconds...
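Before the 2-second run starts, the trace above spells out the whole digest-error setup: NVMe error statistics are switched on with unlimited bdev retries, crc32c corruption is disabled while the controller attaches so the connect itself succeeds, the controller is attached with TCP data digest (--ddgst) enabled, and only then are the next 32 crc32c results corrupted. Condensed into plain RPC calls, a sketch assuming the same socket and target address as this run (it mirrors the traced commands, not the full digest.sh logic):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Tally NVMe errors per status code; retry failed I/O indefinitely.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Leave crc32c intact while connecting.
$RPC accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest (DDGST) on the TCP connection.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 32 crc32c results so computed data digests stop matching.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Start the timed workload on the idle bdevperf instance.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up below as a tcp.c:2233:data_crc32_calc_done error, with the matching WRITE completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22): the same counter that get_transient_errcount reads back once the run finishes.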
00:28:57.339 [2024-11-20 07:26:21.335095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.335274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.335298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.338465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.338526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.338542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.341496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.341559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.341573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.344489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.344543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.344556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.347483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.347543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.347556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.350447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.350507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.350520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.353385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.353458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.356369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.356429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.356441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.359332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.359401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.362311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.362367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.362380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.365251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.365313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.365326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.368192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.368259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.368272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.371149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.371198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.371211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.374113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.374168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.374181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.377092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.377244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.380132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.380176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.380188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.383060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.383104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.383117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.386030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.386089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.386102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.388988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.389107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.389119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.392052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.392096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.392109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.395062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.395106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.395119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.398029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.398084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.398096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.400982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.401087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.401100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.404028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.404072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.404085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.406929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.406985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.339 [2024-11-20 07:26:21.406997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.339 [2024-11-20 07:26:21.409829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.339 [2024-11-20 07:26:21.409880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.409893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.412755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.412869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.412882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.415815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.415890] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.418768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.418827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.418839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.421668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.421711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.421724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.424546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.424601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.424613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.427468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.427529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.427542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.430433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.430487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.430500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.433409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.433452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.433464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.436401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.436458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.436471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.439363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.439423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.439436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.442274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.442338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.442350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.445160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.445214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.445238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.448069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.448130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.448143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.451026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.451143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.451155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.454053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.454114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.454126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.457047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.457111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 
07:26:21.457124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.460016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.460061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.460074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.463005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.463120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.463132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.466030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.466090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.466102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.469014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.469070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.469083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.471982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.472043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.472055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.474981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.475088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.475101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.478046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.478091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.340 [2024-11-20 07:26:21.478104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.481063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.481123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.481136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.484003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.484060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.484073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.486944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.487055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.340 [2024-11-20 07:26:21.487068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.340 [2024-11-20 07:26:21.489950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.340 [2024-11-20 07:26:21.490013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.490026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.492932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.492989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.493001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.495921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.495977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.495989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.498901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.499049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.499062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.501896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.501942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.501955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.504860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.504916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.504928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.507873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.507928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.507941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.510849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.510960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.510973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.513895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.513951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.513963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.516903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.516948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.516961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.519889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.519933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.519946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.522859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.522985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.522997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.525950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.525999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.526012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.528941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.528987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.528999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.341 [2024-11-20 07:26:21.531933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.341 [2024-11-20 07:26:21.531987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.341 [2024-11-20 07:26:21.532000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.534882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.534998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.535011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.537931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.537987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.537999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.540887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.540942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.540954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.543861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.543924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.543937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.546841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.546957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.546970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.549865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.549920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.549933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.552837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.552892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.552905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.555810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.555870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.555882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.558739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.558800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.558813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.561617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.561718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.561730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.564576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.564629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.564642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.567491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.567547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.567559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.570441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.570496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.570509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.573388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.573443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.573455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.576359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.576414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.576426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.579268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.579315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.579327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.582157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.582200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.582213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.585141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.585209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.585231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.588107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.588246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.591121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.601 [2024-11-20 07:26:21.591176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.601 [2024-11-20 07:26:21.591188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.601 [2024-11-20 07:26:21.594062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.594117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.594130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.597040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.597085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.597097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.599991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.600107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.600119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.603037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 
07:26:21.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.603095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.606001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.606061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.606073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.608971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.609019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.609032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.611926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.612041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.612054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.614969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.615016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.615029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.617922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.617998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.620887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.620936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.620948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.623839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 
00:28:57.602 [2024-11-20 07:26:21.623941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.623954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.626819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.626874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.626886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.629668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.629726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.632531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.632584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.632596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.635428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.635466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.635478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.638331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.638387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.638399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.641174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.641217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.641240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.644097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) 
with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.644144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.644157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.646987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.647101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.647113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.649940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.649995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.650007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.652819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.652872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.652884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.655707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.655772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.658601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.658699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.658711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.661554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.661606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.661618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.664421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.664475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.664487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.667334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.667388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.667401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.602 [2024-11-20 07:26:21.670214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.602 [2024-11-20 07:26:21.670295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.602 [2024-11-20 07:26:21.670307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.673105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.673160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.673173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.676020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.676063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.676075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.678933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.678987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.679000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.681789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.681916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.681928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.684770] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.684829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.684841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.687691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.687752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.687765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.690667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.690724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.690736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.693631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.693686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.693698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.696622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.696741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.696753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.699676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.699721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.699734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.702664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.702726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.702738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.705635] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.705690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.705702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.708644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.708688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.708700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.711650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.711770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.711783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.714710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.714769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.714782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.717722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.717778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.720701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.720743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.720756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.723664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.723721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.723734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 
[2024-11-20 07:26:21.726542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.726643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.726655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.729534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.729580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.732472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.732514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.732527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.735414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.735476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.735489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.738388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.738427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.738439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.741352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.741405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.741418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.744320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.744368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.744380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.747231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.747287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.747300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.603 [2024-11-20 07:26:21.750120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.603 [2024-11-20 07:26:21.750164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.603 [2024-11-20 07:26:21.750176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.753002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.753118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.753130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.756043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.756093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.756105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.759044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.759092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.759104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.762009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.762056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.762069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.764969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.765071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.765084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.767995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.768050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.768063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.770967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.771034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.771046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.773918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.773973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.773985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.776896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.777006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.777018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.779923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.779984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.779996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.782891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.782959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.785822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.785868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.788783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.788881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.788894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.791823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.791871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.791883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.794878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.794994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.795106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.604 [2024-11-20 07:26:21.797932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.604 [2024-11-20 07:26:21.798049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.604 [2024-11-20 07:26:21.798136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.800992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.801042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.801055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.803950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.804050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.804063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.807022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.807070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.807083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.809971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.810414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.810557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.813387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.813517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.813643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.816470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.816578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.816662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.819581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.819696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.819854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.822605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.822660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.822678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.825592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.825649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.825667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.828606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.828654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.828666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.831600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.831697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.831710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.834612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.834670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.834683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.837680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.837780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.837868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.840769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.840876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.840955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.843808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.843901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.843914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.846870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.846931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.846950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.849869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.850019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 
07:26:21.850186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.852940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.853045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.853134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.855998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.856115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.856236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.859055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.859171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.864 [2024-11-20 07:26:21.859272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.864 [2024-11-20 07:26:21.862063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.864 [2024-11-20 07:26:21.862179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.862422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.865137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.865264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.865359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.868206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.868318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.868412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.871192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.871308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.865 [2024-11-20 07:26:21.871435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.874139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.874275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.874401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.877126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.877250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.877870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.883943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.884513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.884864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.890965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.891090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.893929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.894051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.897030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.897142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.897253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.900023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.900143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.900254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.903028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.903147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.903368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.906186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.906325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.906447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.909258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.909360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.909510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.912344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.912444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.912460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.915380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.915425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.915440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.918373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.918431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.918445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.921299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.921349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.921363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.924286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.924332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.924347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.927277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.927322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.927337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.930256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.930299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.930313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.933170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.933232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.933246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.865 [2024-11-20 07:26:21.936180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.865 [2024-11-20 07:26:21.936232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.865 [2024-11-20 07:26:21.936247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.939217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.939267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.939282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.942329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.942376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.942390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.945284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.945333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.945347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.948280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.948320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.948334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.951248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.951284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.951299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.954202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.954330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.954343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.957261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.957308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.957322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.960255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.960303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.960317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.963237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.963280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.963295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.966182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.966300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.966315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.969189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.969261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.969275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.972158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.972217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.972243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.975132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.975171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.975185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.978105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.978210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.978242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.981157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.981204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.981231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.984157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.984201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.984216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.987164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.987207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.987233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.990154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.990277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.990291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.993171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.993231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.993246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.996135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.996188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.996201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:21.999119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:21.999158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.866 [2024-11-20 07:26:21.999172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.866 [2024-11-20 07:26:22.002081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.866 [2024-11-20 07:26:22.002183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.002197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.005132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 
07:26:22.005211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.005237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.008119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.008172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.008186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.011105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.011161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.011175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.014086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.014188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.014203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.017206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.017258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.017273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.020272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.020314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.020329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.023318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.023369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.023383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.026369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 
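Each spdk_nvme_print_completion line encodes the status as (SCT/SC) in hex, followed by the raw completion fields: cdw0, the submission queue head pointer (sqhd), phase tag (p), more (m), and do-not-retry (dnr). Every entry here decodes to SCT 0x0 (Generic Command Status) and SC 0x22 (Transient Transport Error) with dnr:0, so the host is permitted to retry. A hedged parser sketch for these lines follows; the regex and helper names are illustrative, not SPDK code:

import re

# Matches the tail of an spdk_nvme_print_completion line as printed above.
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
    r" qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
    r" cdw0:(?P<cdw0>[0-9a-f]+) sqhd:(?P<sqhd>[0-9a-f]+)"
    r" p:(?P<p>[01]) m:(?P<m>[01]) dnr:(?P<dnr>[01])"
)

def decode_completion(line):
    """Return the decoded status fields of one completion log line, or None."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    return {
        "qid": int(m["qid"]),
        "cid": int(m["cid"]),
        "sct": int(m["sct"], 16),      # 0x0 = Generic Command Status
        "sc": int(m["sc"], 16),        # 0x22 = Transient Transport Error
        "retryable": m["dnr"] == "0",  # dnr:0 leaves the retry path open
    }

sample = ("COMMAND TRANSIENT TRANSPORT ERROR (00/22) "
          "qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0")
print(decode_completion(sample))
# {'qid': 1, 'cid': 5, 'sct': 0, 'sc': 34, 'retryable': True}

Run over this excerpt, every failure tallies as SCT 0x0 / SC 0x22 with dnr:0, i.e. uniformly retryable transient transport errors, consistent with deliberate digest corruption rather than media or protocol faults.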
00:28:57.867 [2024-11-20 07:26:22.026418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.026433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.029395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.029434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.029448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.032373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.032432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.032446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.035338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.035389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.035403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.038368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.038408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.038422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.041400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.041456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.041470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.044467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.044512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.044526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.047541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with 
pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.047583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.047598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.050622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.050683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.050697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.053676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.053733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.053747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.056710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.056751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.056765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.867 [2024-11-20 07:26:22.059773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:57.867 [2024-11-20 07:26:22.059879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.867 [2024-11-20 07:26:22.059894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.127 [2024-11-20 07:26:22.062891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.127 [2024-11-20 07:26:22.062930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.127 [2024-11-20 07:26:22.062944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.127 [2024-11-20 07:26:22.065914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.127 [2024-11-20 07:26:22.065959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.065973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.068989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.069030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.069044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.072049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.072107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.072121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.075132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.075260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.075274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.078267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.081332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.081382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.081396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.084380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.084435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.087425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.087467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.087481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.090514] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.090576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.090590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.093562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.093666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.093680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.096674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.096731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.096746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.099722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.099765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.099779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.102782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.102839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.102853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.105799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.105841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.105855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.108769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.108872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.108887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.111931] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.111991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.112004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.114994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.115034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.115049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.118012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.118068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.121041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.121089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.121103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.124090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.124201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.124215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.127169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.127232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.127246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.130147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.130269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.130283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.128 
[2024-11-20 07:26:22.133125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.133177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.128 [2024-11-20 07:26:22.133191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.128 [2024-11-20 07:26:22.136125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.128 [2024-11-20 07:26:22.136237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.136252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.139196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.139253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.139267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.142194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.142256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.142270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.145117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.145166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.145180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.148136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.148252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.148267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.151173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.151235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.151249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:58.129 [2024-11-20 07:26:22.154107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.154156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.154170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.157105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.157162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.157177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.160148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.160261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.160275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.163246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.163298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.163312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.166173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.166249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.166263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.169140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.169193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.169208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.172105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.172255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.175167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.175233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.178174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.178217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.178257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.181161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.181218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.181244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.184133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.184260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.184274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.187175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.187232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.187247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.190136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.190186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.190200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.193152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.193201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.193215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.196214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.196289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.129 [2024-11-20 07:26:22.199203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.129 [2024-11-20 07:26:22.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.129 [2024-11-20 07:26:22.199277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.202164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.202213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.202255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.205115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.205160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.205175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.208094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.208198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.208212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.211162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.211253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.211267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.214118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.214168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.214182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.130 [2024-11-20 07:26:22.217088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.130 [2024-11-20 07:26:22.217129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.130 [2024-11-20 07:26:22.217144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-message pattern (data_crc32_calc_done data digest error, WRITE command print, TRANSIENT TRANSPORT ERROR completion) repeats for further WRITE commands on qid:1, varying only in lba/cid/sqhd, from 07:26:22.220 through 07:26:22.342; a periodic performance tick in between reports 10168.00 IOPS, 1271.00 MiB/s ...]
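For context on the failure mode flooding this log: in the NVMe/TCP transport, each data PDU may carry a CRC32C data digest (DDGST) negotiated at connect time; the data_crc32_calc_done callback above fires once SPDK has verified that checksum, and a mismatch is surfaced to the host as TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport fault rather than a media error. Below is a minimal sketch of that check in plain C; it is independent of SPDK, and crc32c/ddgst_mismatch are illustrative names, not SPDK APIs.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 -- the
 * checksum used by the NVMe/TCP DDGST field. Table-free for brevity. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Hypothetical helper: nonzero when the digest received with a data PDU
 * does not match its payload -- the condition the log above reports as
 * "Data digest error". */
static int ddgst_mismatch(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) != recv_ddgst;
}

int main(void)
{
    uint8_t payload[32] = {0};  /* stand-in for PDU data */
    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact digest  -> mismatch=%d\n", ddgst_mismatch(payload, sizeof(payload), good));
    printf("corrupt digest -> mismatch=%d\n", ddgst_mismatch(payload, sizeof(payload), good ^ 1u));
    return 0;
}

Production code would normally use the SSE4.2 crc32 instruction or a table-driven variant; the bitwise loop here only keeps the sketch self-contained.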
[... the data digest error / TRANSIENT TRANSPORT ERROR pattern continues unchanged, varying only in lba/cid/sqhd, from 07:26:22.342 through 07:26:22.636 ...]
00:28:58.656 [2024-11-20 07:26:22.639608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.639678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20
07:26:22.639692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.642726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.642789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.642803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.645775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.645851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.645864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.648811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.648903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.648917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.651872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.651996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.652010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.655026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.655101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.655115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.658067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.658159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.658173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.661100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.661191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.656 [2024-11-20 07:26:22.661211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.664080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.664181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.664195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.667195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.667301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.667327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.670267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.670331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.670345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.673316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.673407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.673426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.676352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.676423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.676437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.679312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.679403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.679423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.682309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.682416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.682436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.685367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.685449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.685468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.688426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.688496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.688510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.691491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.691566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.691580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.694505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.656 [2024-11-20 07:26:22.694602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.656 [2024-11-20 07:26:22.694617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.656 [2024-11-20 07:26:22.697592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.697655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.697669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.700649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.700715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.700729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.703341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.703493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.703513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.706294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.706519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.706576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.709361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.709575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.712233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.712273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.712287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.715268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.715314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.715327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.718316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.718359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.718373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.721370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.721411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.721425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.724421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.724464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.724479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.727452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.727495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.727509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.730516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.730557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.730571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.733589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.733630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.733644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.736636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.736745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.736759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.739782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.739826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.739840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.742863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.742905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.742920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.745922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.745967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.745981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.748958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.748998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.749013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.752046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.752148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.752162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.755196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.755264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.755278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.758213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.758275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.758289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.761297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.761336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.764273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.764313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.764327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.767235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.767272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.767286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.770162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.770203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.770239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.773162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.657 [2024-11-20 07:26:22.773204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.657 [2024-11-20 07:26:22.773230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.657 [2024-11-20 07:26:22.776237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.779317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.779363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.779377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.782370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.782414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.782427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.785443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.785490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.785504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.788501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 
07:26:22.788549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.791578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.791702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.791716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.794729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.794773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.794788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.797806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.797848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.797863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.800831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.800877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.800891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.803827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.803869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.803883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.806816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.806930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.809949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 
00:28:58.658 [2024-11-20 07:26:22.809994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.810008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.813024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.813067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.813081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.816114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.816160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.816174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.819148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.819195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.819209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.822211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.822282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.822296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.825273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.828370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.828417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.828430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.831479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) 
with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.831541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.834561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.834607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.834621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.837600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.837753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.840753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.840803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.840818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.843842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.843893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.843907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.846924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.846969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.846983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.849957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.850004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.850018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.658 [2024-11-20 07:26:22.853041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.658 [2024-11-20 07:26:22.853144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.658 [2024-11-20 07:26:22.853159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.856182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.856234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.856248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.859257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.859296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.859310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.862330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.862377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.865396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.865436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.865450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.868495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.868539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.868553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.871588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.871690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.871704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.874726] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.874772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.874786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.877803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.877912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.878094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.881189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.881241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.881267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.884263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.884299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.884315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.887334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.887371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.887386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.890376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.890414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.890429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.893537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.893645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.893747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.896677] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.896781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.896870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.899792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.899896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.899987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.902881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.902989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.903173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.906355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.906466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.906614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.909431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.909537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.909669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.912545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.912653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.912775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.915680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.915784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.915875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.919 
[2024-11-20 07:26:22.918765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.919012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.921902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.922013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.922125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.925019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.919 [2024-11-20 07:26:22.925132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.919 [2024-11-20 07:26:22.925151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.919 [2024-11-20 07:26:22.928141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.928184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.928198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.931196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.931244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.931259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.934250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.934298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.934312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.937290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.937336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.937350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:58.920 [2024-11-20 07:26:22.940388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.940494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.940587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.943510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.943617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.943707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.946626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.946727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.946821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.949690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.949798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.949885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.952813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.952920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.953007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.955965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.956064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.956079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.959093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.959139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.959159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.962150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.962191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.962206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.965197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.965257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.965298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.968307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.968348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.968362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.971355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.971396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.971410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.974412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.974459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.974473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.977479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.977518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.977532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.980542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.980586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.980601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.983569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.983677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.983691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.986694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.986740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.986754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.989752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.989798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.989812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.992794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.992842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.992855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.995919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.995963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.995978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:22.998937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:22.999046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:22.999060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:23.002058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:23.002101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:23.002116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:23.005135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:23.005177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:23.005191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:23.008238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:23.008279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.920 [2024-11-20 07:26:23.008293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.920 [2024-11-20 07:26:23.011277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.920 [2024-11-20 07:26:23.011326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.011340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.014340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.014381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.014395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.017413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.017455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.017469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.020471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.020521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.020536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.023550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.023595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.023609] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.026604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.026651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.026665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.029635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.029737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.029750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.032754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.032796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.032810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.035751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.035805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.038713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.038753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.038767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.041642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.041690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.041704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.044649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.044752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.044766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.047703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.047746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.047760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.050679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.050728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.050742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.053633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.053682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.053695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.056610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.056652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.056666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.059593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.059701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.059715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.062666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.062714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.062728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.065701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.065745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 
07:26:23.065759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.068702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.068746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.068760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.071635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.071683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.071697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.074632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.074733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.074747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.077733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.077778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.077792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.080785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.080830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.080844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.083845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.083906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.086897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.086943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.921 [2024-11-20 07:26:23.086957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.089929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.090029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.921 [2024-11-20 07:26:23.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.921 [2024-11-20 07:26:23.093053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.921 [2024-11-20 07:26:23.093090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.093104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.096088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.096131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.096146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.099132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.099171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.099185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.102078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.102182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.102196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.105150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.105199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.105213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.108163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.108205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.108231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.111166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.111209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.111235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.114112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.114248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.114263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.922 [2024-11-20 07:26:23.117190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:58.922 [2024-11-20 07:26:23.117243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.922 [2024-11-20 07:26:23.117257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.120189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.120248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.120262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.123261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.123309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.123323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.126243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.126289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.126303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.129281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.129323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.129337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.132254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.132298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.132312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.135262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.135304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.135318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.138218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.138269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.138283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.141175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.141287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.141301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.144193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.144240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.144254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.147133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.147180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.147194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.150132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.150174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.150188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.153127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.153244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.153258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.182 [2024-11-20 07:26:23.156138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.182 [2024-11-20 07:26:23.156179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.182 [2024-11-20 07:26:23.156193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.159120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.159163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.159176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.162049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.162090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.162105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.164999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.165097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.165111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.168101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.168143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.168157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.171314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.171356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.171370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.174594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.174640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.174654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.177852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.177895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.177909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.180908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.181017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.181031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.184052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.184094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.184109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.187140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.187182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.187197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.190231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.190270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.190285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.193270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.193310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.193325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.196355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.196399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.196413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.199410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.199454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.199468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.202469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.202515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.202529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.205508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.205556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.205570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.208571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.208616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.208630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.211619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.211660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.211673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.214611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 
07:26:23.214718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.214732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.217663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.217704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.217718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.220674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.220724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.220738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.223657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.223697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.223712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.226626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.183 [2024-11-20 07:26:23.226674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.183 [2024-11-20 07:26:23.226688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.183 [2024-11-20 07:26:23.229600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.229703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.229716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.232661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.232706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.232721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.235653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 
00:28:59.184 [2024-11-20 07:26:23.235701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.235715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.238679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.238728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.238741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.241695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.241744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.241758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.244768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.244870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.244884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.247880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.247923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.247937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.250871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.250910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.250922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.253789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.253829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.253841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.256695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with 
pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.256733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.256745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.259593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.259693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.259705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.262556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.262596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.262608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.265373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.265412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.265423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.268252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.268288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.268300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.271115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.271211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.271233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.274073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.274113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.274125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.276977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.277017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.277028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.279921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.279961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.279973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.282857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.282953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.282965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.285822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.285864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.285875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.288681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.288718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.288730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.291535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.291573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.291584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.294415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.294454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.294465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.297252] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.297288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.184 [2024-11-20 07:26:23.297299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.184 [2024-11-20 07:26:23.300096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.184 [2024-11-20 07:26:23.300134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.300146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.302966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.185 [2024-11-20 07:26:23.303005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.303017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.305805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.185 [2024-11-20 07:26:23.305905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.305916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.308692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.185 [2024-11-20 07:26:23.308729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.308741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.311518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.185 [2024-11-20 07:26:23.311557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.311568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.314379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8 00:28:59.185 [2024-11-20 07:26:23.314418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.185 [2024-11-20 07:26:23.314429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:59.185 [2024-11-20 07:26:23.317231] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8
00:28:59.185 [2024-11-20 07:26:23.317269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.185 [2024-11-20 07:26:23.317281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:59.185 [2024-11-20 07:26:23.320030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8
00:28:59.185 [2024-11-20 07:26:23.320077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.185 [2024-11-20 07:26:23.320088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:59.185 [2024-11-20 07:26:23.322892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8
00:28:59.185 [2024-11-20 07:26:23.322932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.185 [2024-11-20 07:26:23.322944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:59.185 [2024-11-20 07:26:23.325705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc278f0) with pdu=0x2000166ff3c8
00:28:59.185 [2024-11-20 07:26:23.326915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20384 len:32 10168.00 IOPS, 1271.00 MiB/s
[2024-11-20T07:26:23.388Z] SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.185 [2024-11-20 07:26:23.327007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:59.185
00:28:59.185 Latency(us)
00:28:59.185 [2024-11-20T07:26:23.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.185 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:59.185 nvme0n1 : 2.00 10163.82 1270.48 0.00 0.00 1570.79 1039.75 11292.36
00:28:59.185 [2024-11-20T07:26:23.388Z] ===================================================================================================================
00:28:59.185 [2024-11-20T07:26:23.388Z] Total : 10163.82 1270.48 0.00 0.00 1570.79 1039.75 11292.36
00:28:59.185 {
00:28:59.185 "results": [
00:28:59.185 {
00:28:59.185 "job": "nvme0n1",
00:28:59.185 "core_mask": "0x2",
00:28:59.185 "workload": "randwrite",
00:28:59.185 "status": "finished",
00:28:59.185 "queue_depth": 16,
00:28:59.185 "io_size": 131072,
00:28:59.185 "runtime": 2.002396,
00:28:59.185 "iops": 10163.823739160485,
00:28:59.185 "mibps": 1270.4779673950607,
00:28:59.185 "io_failed": 0,
00:28:59.185 "io_timeout": 0,
00:28:59.185 "avg_latency_us": 1570.7936139332367,
00:28:59.185 "min_latency_us": 1039.753846153846,
00:28:59.185 "max_latency_us": 11292.356923076923
00:28:59.185 }
00:28:59.185 ],
00:28:59.185 "core_count": 1
00:28:59.185 }
00:28:59.185 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:59.185 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
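The entries above and below trace host/digest.sh counting the transient transport errors produced by the injected digest corruption: bdevperf's per-bdev iostat is fetched over its RPC socket and the counter is pulled out of the NVMe driver-specific stats with jq. A minimal standalone sketch of that check in bash, using the rpc.py path, socket, bdev name, and jq filter shown in this trace:

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check as traced in this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock      # bdevperf RPC socket used by the test

    # bdev_get_iostat returns JSON; the NVMe bdev exposes its error counters
    # under .driver_specific.nvme_error.
    errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # digest.sh@71: the step passes only if at least one transient transport
    # error was observed (657 in this run).
    (( errcount > 0 ))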
00:28:59.185 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:59.185 | .driver_specific
00:28:59.185 | .nvme_error
00:28:59.185 | .status_code
00:28:59.185 | .command_transient_transport_error'
00:28:59.185 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 657 > 0 ))
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79133
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79133 ']'
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79133
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79133
00:28:59.444 killing process with pid 79133
Received shutdown signal, test time was about 2.000000 seconds
00:28:59.444
00:28:59.444 Latency(us)
00:28:59.444 [2024-11-20T07:26:23.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.444 [2024-11-20T07:26:23.647Z] ===================================================================================================================
00:28:59.444 [2024-11-20T07:26:23.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79133'
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79133
00:28:59.444 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79133
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 78932
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78932 ']'
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78932
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78932
00:28:59.704 killing process with pid 78932
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78932'
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78932
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78932
00:28:59.704
00:28:59.704 real 0m16.481s
00:28:59.704 user 0m32.157s
00:28:59.704 sys 0m3.431s
00:28:59.704 ************************************
00:28:59.704 END TEST nvmf_digest_error
00:28:59.704 ************************************
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup
00:28:59.704 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20}
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:28:59.963 rmmod nvme_tcp
00:28:59.963 rmmod nvme_fabrics
00:28:59.963 rmmod nvme_keyring
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:28:59.963 Process with pid 78932 is not found
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 78932 ']'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 78932
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 78932 ']'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 78932
00:28:59.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78932) - No such process
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 78932 is not found'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@122 -- # delete_dev nvmf_br
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns=
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br'
00:28:59.963 07:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete nvmf_br
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator0
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns=
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0'
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator0
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator1
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns=
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1'
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator1
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=()
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save
00:28:59.963 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore
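The nvmftestfini/nvmf_fini trace above unloads the NVMe-oF kernel modules, removes the target network namespace, deletes the nvmf_br bridge and the host-side initiator veth devices (the in-namespace target devices hit the continue branch because they disappear with the namespace), and finally strips only SPDK's tagged iptables rules. A condensed sketch of the same sequence, assuming _remove_target_ns reduces to deleting the nvmf_ns_spdk namespace:

    # Module cleanup: modprobe -r cascades to nvme_tcp, nvme_fabrics, nvme_keyring.
    modprobe -v -r nvme-tcp

    # Dropping the namespace also removes target0/target1, which live inside it.
    ip netns delete nvmf_ns_spdk

    # Host-side devices are deleted explicitly, guarded by an existence check.
    ip link delete nvmf_br
    for dev in initiator0 initiator1; do
        [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
    done

    # iptr: keep every firewall rule except the ones tagged SPDK_NVMF at setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore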
00:28:59.963 00:28:59.963 real 0m33.688s 00:28:59.963 user 1m4.163s 00:28:59.963 sys 0m7.271s 00:28:59.963 ************************************ 00:28:59.963 END TEST nvmf_digest 00:28:59.964 ************************************ 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.964 ************************************ 00:28:59.964 START TEST nvmf_host_multipath 00:28:59.964 ************************************ 00:28:59.964 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:00.226 * Looking for test storage... 00:29:00.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.226 --rc genhtml_branch_coverage=1 00:29:00.226 --rc genhtml_function_coverage=1 00:29:00.226 --rc genhtml_legend=1 00:29:00.226 --rc geninfo_all_blocks=1 00:29:00.226 --rc geninfo_unexecuted_blocks=1 00:29:00.226 00:29:00.226 ' 00:29:00.226 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.226 --rc genhtml_branch_coverage=1 00:29:00.226 --rc genhtml_function_coverage=1 00:29:00.226 --rc genhtml_legend=1 00:29:00.226 --rc geninfo_all_blocks=1 00:29:00.226 --rc geninfo_unexecuted_blocks=1 00:29:00.226 00:29:00.227 ' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.227 --rc genhtml_branch_coverage=1 00:29:00.227 --rc genhtml_function_coverage=1 00:29:00.227 --rc genhtml_legend=1 00:29:00.227 --rc geninfo_all_blocks=1 00:29:00.227 --rc geninfo_unexecuted_blocks=1 00:29:00.227 00:29:00.227 ' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.227 --rc genhtml_branch_coverage=1 00:29:00.227 --rc genhtml_function_coverage=1 00:29:00.227 --rc genhtml_legend=1 00:29:00.227 --rc geninfo_all_blocks=1 00:29:00.227 --rc geninfo_unexecuted_blocks=1 00:29:00.227 00:29:00.227 ' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@50 -- # : 0 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:00.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:00.227 07:26:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:00.227 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # return 0 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:00.228 07:26:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@61 -- # add_to_ns target0 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:29:00.228 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:29:00.229 10.0.0.1 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:29:00.229 10.0.0.2 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:29:00.229 07:26:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:29:00.229 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:29:00.491 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:29:00.492 07:26:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:29:00.492 10.0.0.3 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:29:00.492 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:29:00.493 10.0.0.4 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 
-- # ip link set initiator1_br up 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:00.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:29:00.493 00:29:00.493 --- 10.0.0.1 ping statistics --- 00:29:00.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.493 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 
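The pattern being traced above is setup.sh's address lookup plus connectivity check: each veth device's IP was stored in /sys/class/net/<dev>/ifalias at creation time, so get_ip_address simply reads that file back (through ip netns exec for devices living in the target namespace) and ping_ip sends a single ICMP echo across the veth pair. A minimal sketch of that lookup-and-ping pair, assuming only the device names and the nvmf_ns_spdk namespace visible in this run (get_ip and get_ip_ns are illustrative helpers, not the setup.sh functions):

    get_ip()    { cat "/sys/class/net/$1/ifalias"; }                             # host-side device
    get_ip_ns() { ip netns exec nvmf_ns_spdk cat "/sys/class/net/$1/ifalias"; }  # namespaced device
    ip netns exec nvmf_ns_spdk ping -c 1 "$(get_ip initiator0)"    # namespace to host end of the pair
    ping -c 1 "$(get_ip_ns target0)"                               # host to namespace end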
00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:00.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:29:00.493 00:29:00.493 --- 10.0.0.2 ping statistics --- 00:29:00.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.493 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:00.493 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:29:00.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:00.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:29:00.494 00:29:00.494 --- 10.0.0.3 ping statistics --- 00:29:00.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.494 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:29:00.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:29:00.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:29:00.494 00:29:00.494 --- 10.0.0.4 ping statistics --- 00:29:00.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.494 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@281 -- # return 0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:29:00.494 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:00.495 ' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:00.495 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@328 -- # nvmfpid=79443 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@329 -- # waitforlisten 79443 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79443 ']' 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.755 07:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:00.755 [2024-11-20 07:26:24.734035] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:00.755 [2024-11-20 07:26:24.734092] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.755 [2024-11-20 07:26:24.872916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:00.755 [2024-11-20 07:26:24.907698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.755 [2024-11-20 07:26:24.907739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.755 [2024-11-20 07:26:24.907745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.755 [2024-11-20 07:26:24.907751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.755 [2024-11-20 07:26:24.907755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
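nvmfappstart, expanded above, launches the target inside the namespace and then blocks until the RPC socket answers before any rpc.py configuration is attempted. A condensed sketch of that bring-up, reusing the binary path and flags shown in the trace (the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation):

    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # rpc_get_methods succeeds only once the app is listening on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done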
00:29:00.755 [2024-11-20 07:26:24.908444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.755 [2024-11-20 07:26:24.908675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.755 [2024-11-20 07:26:24.938943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79443 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:01.689 [2024-11-20 07:26:25.830346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.689 07:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:01.947 Malloc0 00:29:01.947 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:02.206 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.464 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.721 [2024-11-20 07:26:26.670134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:02.721 [2024-11-20 07:26:26.878217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:02.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
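With the target configured (tcp transport, Malloc0 namespace, listeners on 4420 and 4421), the test next attaches bdevperf to the same subsystem over both listeners and drives ANA-state changes, as the trace below shows. The attach and state-flip RPCs are collected here verbatim from the log; giving both paths the same -b Nvme0 name with -x multipath is what makes bdev_nvme treat 4420 and 4421 as two paths to one bdev:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # flip which listener is optimized; confirm_io_on_port then checks that I/O followed
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

confirm_io_on_port verifies each switch two ways: nvmf_subsystem_get_listeners piped through jq reports which trsvcid currently holds the requested ANA state, and scripts/bpftrace.sh attached to the target pid (79443) with nvmf_path.bt counts I/O per path, producing the @path[10.0.0.2, 4421]: N lines that appear in trace.txt below.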
00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79493 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79493 /var/tmp/bdevperf.sock 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79493 ']' 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:02.721 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.722 07:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:03.714 07:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.714 07:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:03.714 07:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:03.972 07:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:04.231 Nvme0n1 00:29:04.231 07:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:04.487 Nvme0n1 00:29:04.487 07:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:29:04.487 07:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:05.419 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:05.419 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:05.677 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:05.935 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:05.935 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79538 00:29:05.935 
07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:05.935 07:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:12.508 07:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:12.508 07:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:12.508 Attaching 4 probes... 00:29:12.508 @path[10.0.0.2, 4421]: 26314 00:29:12.508 @path[10.0.0.2, 4421]: 26597 00:29:12.508 @path[10.0.0.2, 4421]: 26807 00:29:12.508 @path[10.0.0.2, 4421]: 26494 00:29:12.508 @path[10.0.0.2, 4421]: 26437 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79538 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79651 00:29:12.508 07:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | 
.address.trsvcid' 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:19.067 Attaching 4 probes... 00:29:19.067 @path[10.0.0.2, 4420]: 25249 00:29:19.067 @path[10.0.0.2, 4420]: 25320 00:29:19.067 @path[10.0.0.2, 4420]: 25834 00:29:19.067 @path[10.0.0.2, 4420]: 25644 00:29:19.067 @path[10.0.0.2, 4420]: 25657 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79651 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:19.067 07:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:19.067 07:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:19.067 07:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:19.067 07:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:19.067 07:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79770 00:29:19.067 07:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:25.623 Attaching 4 probes... 
00:29:25.623 @path[10.0.0.2, 4421]: 16164 00:29:25.623 @path[10.0.0.2, 4421]: 26589 00:29:25.623 @path[10.0.0.2, 4421]: 26740 00:29:25.623 @path[10.0.0.2, 4421]: 26665 00:29:25.623 @path[10.0.0.2, 4421]: 26298 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79770 00:29:25.623 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:25.624 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:25.624 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:25.624 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:25.928 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:25.928 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79887 00:29:25.928 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:25.928 07:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:32.537 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:32.537 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:32.537 Attaching 4 probes... 
00:29:32.537 00:29:32.537 00:29:32.537 00:29:32.537 00:29:32.537 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79887 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80005 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:32.537 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:39.094 Attaching 4 probes... 
00:29:39.094 @path[10.0.0.2, 4421]: 25547 00:29:39.094 @path[10.0.0.2, 4421]: 26075 00:29:39.094 @path[10.0.0.2, 4421]: 25972 00:29:39.094 @path[10.0.0.2, 4421]: 25819 00:29:39.094 @path[10.0.0.2, 4421]: 25514 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80005 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:39.094 07:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:29:40.025 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:29:40.025 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80130 00:29:40.025 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:40.025 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:46.576 07:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:46.576 07:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:46.576 Attaching 4 probes... 
00:29:46.576 @path[10.0.0.2, 4420]: 24876 00:29:46.576 @path[10.0.0.2, 4420]: 25000 00:29:46.576 @path[10.0.0.2, 4420]: 25344 00:29:46.576 @path[10.0.0.2, 4420]: 25325 00:29:46.576 @path[10.0.0.2, 4420]: 25425 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80130 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:46.576 [2024-11-20 07:27:10.389184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:46.576 07:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:29:53.121 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:29:53.121 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80313 00:29:53.121 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:53.121 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79443 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:59.692 Attaching 4 probes... 
00:29:59.692 @path[10.0.0.2, 4421]: 25775 00:29:59.692 @path[10.0.0.2, 4421]: 26144 00:29:59.692 @path[10.0.0.2, 4421]: 26198 00:29:59.692 @path[10.0.0.2, 4421]: 20407 00:29:59.692 @path[10.0.0.2, 4421]: 19043 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80313 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79493 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79493 ']' 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79493 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79493 00:29:59.692 killing process with pid 79493 00:29:59.692 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79493' 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79493 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79493 00:29:59.693 { 00:29:59.693 "results": [ 00:29:59.693 { 00:29:59.693 "job": "Nvme0n1", 00:29:59.693 "core_mask": "0x4", 00:29:59.693 "workload": "verify", 00:29:59.693 "status": "terminated", 00:29:59.693 "verify_range": { 00:29:59.693 "start": 0, 00:29:59.693 "length": 16384 00:29:59.693 }, 00:29:59.693 "queue_depth": 128, 00:29:59.693 "io_size": 4096, 00:29:59.693 "runtime": 54.226169, 00:29:59.693 "iops": 10794.308556077418, 00:29:59.693 "mibps": 42.165267797177414, 00:29:59.693 "io_failed": 0, 00:29:59.693 "io_timeout": 0, 00:29:59.693 "avg_latency_us": 11835.298947088986, 00:29:59.693 "min_latency_us": 538.7815384615385, 00:29:59.693 "max_latency_us": 7020619.618461538 00:29:59.693 } 00:29:59.693 ], 00:29:59.693 "core_count": 1 00:29:59.693 } 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79493 00:29:59.693 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:59.693 [2024-11-20 07:26:26.927486] Starting SPDK v25.01-pre git sha1 400f484f7 
/ DPDK 24.03.0 initialization...
00:29:59.693 [2024-11-20 07:26:26.927557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79493 ]
00:29:59.693 [2024-11-20 07:26:27.068195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:59.693 [2024-11-20 07:26:27.118542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:59.693 [2024-11-20 07:26:27.152230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:29:59.693 Running I/O for 90 seconds...
00:29:59.693 9877.00 IOPS, 38.58 MiB/s [2024-11-20T07:27:23.896Z]
11465.50 IOPS, 44.79 MiB/s [2024-11-20T07:27:23.896Z]
12121.00 IOPS, 47.35 MiB/s [2024-11-20T07:27:23.896Z]
12410.75 IOPS, 48.48 MiB/s [2024-11-20T07:27:23.896Z]
12607.00 IOPS, 49.25 MiB/s [2024-11-20T07:27:23.896Z]
12716.50 IOPS, 49.67 MiB/s [2024-11-20T07:27:23.896Z]
12789.00 IOPS, 49.96 MiB/s [2024-11-20T07:27:23.896Z]
[2024-11-20 07:26:36.592952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.693 [2024-11-20 07:26:36.593002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[... further nvme_qpair.c print_command/print_completion *NOTICE* pairs from 07:26:36.593 through 07:26:36.596: WRITE lba:29280-29840 and READ lba:28824-29264, all on sqid/qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:29:59.696 12829.50 IOPS, 50.12 MiB/s [2024-11-20T07:27:23.899Z]
12812.00 IOPS, 50.05 MiB/s [2024-11-20T07:27:23.899Z]
12798.00 IOPS, 49.99 MiB/s [2024-11-20T07:27:23.899Z]
12805.45 IOPS, 50.02 MiB/s [2024-11-20T07:27:23.899Z]
12809.00 IOPS, 50.04 MiB/s [2024-11-20T07:27:23.899Z]
12810.77 IOPS, 50.04 MiB/s [2024-11-20T07:27:23.899Z]
12805.43 IOPS, 50.02 MiB/s [2024-11-20T07:27:23.899Z]
[2024-11-20 07:26:43.027882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.696 [2024-11-20 07:26:43.027927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... further nvme_qpair.c print_command/print_completion *NOTICE* pairs from 07:26:43.027 through 07:26:43.029: WRITE lba:30696-31000 and READ lba:30176-30440, all on sqid/qid:1, every completion again reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:29:59.699 [2024-11-20 07:26:43.029817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.699 [2024-11-20 07:26:43.029824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.029988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.029995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.699 [2024-11-20 07:26:43.030129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.699 [2024-11-20 07:26:43.030366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:59.699 [2024-11-20 07:26:43.030381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:29:59.700 [2024-11-20 07:26:43.030521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.700 [2024-11-20 07:26:43.030869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:59.700 [2024-11-20 07:26:43.030951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.700 [2024-11-20 07:26:43.030958] nvme_qpair.c: 
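A note on the notices above: the bracketed pair in each completion line, e.g. "(03/02)", is the NVMe status code type and status code in hex. Status code type 0x3 is Path Related Status, and status code 0x2 under that type is Asymmetric Access Inaccessible, meaning the ANA state of the path behind qid:1 reported inaccessible while these 4 KiB I/Os (len:8 blocks, len:0x1000 in the write SGLs) were outstanding; dnr:0 on the same lines indicates the commands may be retried. The following standalone C sketch is illustrative only, not code from the SPDK tree, and covers just the asymmetric-access status codes; it decodes the pair and cross-checks one of the throughput samples that follow:

/* decode_status.c - illustrative decoder for the "(SCT/SC)" pair printed
 * in the completion notices above; values follow the NVMe base spec. */
#include <stdio.h>

static const char *path_status_str(unsigned sct, unsigned sc)
{
    if (sct != 0x3) {
        return "not a path-related status (SCT != 0x3)";
    }
    switch (sc) {
    case 0x1: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
    case 0x2: return "ASYMMETRIC ACCESS INACCESSIBLE";
    case 0x3: return "ASYMMETRIC ACCESS TRANSITION";
    default:  return "other path-related status";
    }
}

int main(void)
{
    /* The log's "(03/02)": status code type 0x03, status code 0x02. */
    printf("%s\n", path_status_str(0x03, 0x02));

    /* Each I/O is len:8 blocks = 0x1000 bytes (4 KiB), so the first sample
     * below works out: 12266.40 IOPS * 4096 B = ~47.92 MiB/s. */
    printf("%.2f MiB/s\n", 12266.40 * 4096.0 / (1024.0 * 1024.0));
    return 0;
}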
00:29:59.700 12266.40 IOPS, 47.92 MiB/s [2024-11-20T07:27:23.903Z] 12018.12 IOPS, 46.95 MiB/s [2024-11-20T07:27:23.903Z] 12093.29 IOPS, 47.24 MiB/s [2024-11-20T07:27:23.903Z] 12164.11 IOPS, 47.52 MiB/s [2024-11-20T07:27:23.903Z] 12225.79 IOPS, 47.76 MiB/s [2024-11-20T07:27:23.903Z] 12272.10 IOPS, 47.94 MiB/s [2024-11-20T07:27:23.903Z] 12314.00 IOPS, 48.10 MiB/s [2024-11-20T07:27:23.903Z]
00:29:59.700 [2024-11-20 07:26:49.844529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.700 [2024-11-20 07:26:49.844576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:59.703 [... 07:26:49.844-846 notices trimmed: the same pair repeats for WRITE sqid:1 lba:122904-123272 and READ sqid:1 lba:122384-122880, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ...]
00:29:59.704 [2024-11-20 07:26:49.847302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.704 [2024-11-20 07:26:49.847317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:59.704 [2024-11-20 07:26:49.847337] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 
07:26:49.847647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:26:49.847779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.704 [2024-11-20 07:26:49.847786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:59.704 11866.27 IOPS, 46.35 MiB/s [2024-11-20T07:27:23.907Z] 11350.35 IOPS, 44.34 MiB/s [2024-11-20T07:27:23.907Z] 10877.42 IOPS, 42.49 MiB/s [2024-11-20T07:27:23.907Z] 10442.32 IOPS, 40.79 MiB/s [2024-11-20T07:27:23.907Z] 10040.69 IOPS, 39.22 MiB/s [2024-11-20T07:27:23.907Z] 9668.81 IOPS, 37.77 MiB/s [2024-11-20T07:27:23.907Z] 9323.50 IOPS, 36.42 MiB/s [2024-11-20T07:27:23.907Z] 9358.62 IOPS, 36.56 MiB/s [2024-11-20T07:27:23.907Z] 9481.33 IOPS, 37.04 MiB/s [2024-11-20T07:27:23.907Z] 9594.32 IOPS, 37.48 MiB/s [2024-11-20T07:27:23.907Z] 9698.50 IOPS, 37.88 MiB/s [2024-11-20T07:27:23.907Z] 9792.48 IOPS, 38.25 MiB/s [2024-11-20T07:27:23.907Z] 9885.41 IOPS, 38.61 MiB/s [2024-11-20T07:27:23.907Z] [2024-11-20 07:27:02.941080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.704 [2024-11-20 07:27:02.941122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.704 [2024-11-20 07:27:02.941137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.704 [2024-11-20 07:27:02.941145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.704 
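(Aside: the "(03/02)" pairs printed above are the status code type / status code of each NVMe completion. SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, i.e. the ANA state of this path went inaccessible during the multipath test; dnr:0 means the I/Os may be retried on another path. A minimal C sketch of decoding that status word from completion-queue-entry dword 3, per the NVMe base spec field layout - the struct and function names are illustrative, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* CQE dword 3: CID in bits 15:0, phase tag in bit 16, status field in
     * bits 31:17 (SC 24:17, SCT 27:25, M bit 30, DNR bit 31). */
    struct nvme_status {
        uint8_t sc;   /* status code */
        uint8_t sct;  /* status code type */
        uint8_t m;    /* more */
        uint8_t dnr;  /* do not retry */
    };

    static struct nvme_status decode_cqe_dw3(uint32_t dw3)
    {
        struct nvme_status s;
        s.sc  = (dw3 >> 17) & 0xff;
        s.sct = (dw3 >> 25) & 0x7;
        s.m   = (dw3 >> 30) & 0x1;
        s.dnr = (dw3 >> 31) & 0x1;
        return s;
    }

    int main(void)
    {
        /* ANA Inaccessible as seen in the log: sct=0x3, sc=0x2 */
        uint32_t dw3 = (0x3u << 25) | (0x2u << 17);
        struct nvme_status s = decode_cqe_dw3(dw3);
        printf("(%02x/%02x) m:%u dnr:%u\n", (unsigned)s.sct, (unsigned)s.sc,
               (unsigned)s.m, (unsigned)s.dnr);
        return 0; /* prints "(03/02) m:0 dnr:0" */
    }

The same decoding explains the "(00/08)" completions that follow: SCT 0x0 (generic) with SC 0x08, Command Aborted due to SQ Deletion.)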
00:29:59.704 nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: ~127 repetitive command/completion pairs, 2024-11-20 07:27:02.941080 - 07:27:02.943100] queued READ commands (sqid:1 nsid:1, lba 96832-97456, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1, lba 97472-97848, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.708 [2024-11-20 07:27:02.943138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:59.708 [2024-11-20 07:27:02.943144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:59.708 [2024-11-20 07:27:02.943149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0
00:29:59.708 [2024-11-20 07:27:02.943159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.708 nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: 4 repetitive pairs, 2024-11-20 07:27:02.943251 - 07:27:02.943307] admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000) each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.708 [2024-11-20 07:27:02.943314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c1d0 is same with the state(6) to be set
00:29:59.708 [2024-11-20 07:27:02.944127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
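(Aside: the "aborting queued i/o" / "Command completed manually" pair above is the software-side cleanup when a submission queue goes away: requests that never reached the controller are completed locally with a synthesized ABORTED - SQ DELETION (00/08) status. A small illustrative C sketch of that pattern - this is only the general shape, not SPDK's actual implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One request still queued when the qpair was torn down. */
    struct queued_req {
        uint16_t cid;
        uint64_t lba;
        struct queued_req *next;
    };

    /* Complete a request in software with a synthesized status,
     * mirroring the "Command completed manually" notice. */
    static void manual_complete(struct queued_req *r, uint8_t sct, uint8_t sc)
    {
        printf("cid:%u lba:%llu completed manually with (%02x/%02x)\n",
               (unsigned)r->cid, (unsigned long long)r->lba,
               (unsigned)sct, (unsigned)sc);
    }

    /* Walk the queue and abort everything that never reached the
     * controller: generic status (0x0) / aborted - SQ deletion (0x08). */
    static void abort_queued_reqs(struct queued_req *head)
    {
        while (head != NULL) {
            struct queued_req *next = head->next;
            manual_complete(head, 0x0, 0x08);
            free(head);
            head = next;
        }
    }

    int main(void)
    {
        struct queued_req *r2 = malloc(sizeof *r2);
        struct queued_req *r1 = malloc(sizeof *r1);
        *r2 = (struct queued_req){ .cid = 1, .lba = 97464, .next = NULL };
        *r1 = (struct queued_req){ .cid = 0, .lba = 97456, .next = r2 };
        abort_queued_reqs(r1);
        return 0;
    }

The controller reset that follows then tries to reconnect the TCP qpair; the errors below show the first attempt failing before a later retry succeeds.)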
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110c1d0 (9): Bad file descriptor 00:29:59.708 [2024-11-20 07:27:02.944363] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.708 [2024-11-20 07:27:02.944379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110c1d0 with addr=10.0.0.2, port=4421 00:29:59.708 [2024-11-20 07:27:02.944387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c1d0 is same with the state(6) to be set 00:29:59.708 [2024-11-20 07:27:02.944402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110c1d0 (9): Bad file descriptor 00:29:59.708 [2024-11-20 07:27:02.944417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.708 [2024-11-20 07:27:02.944424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.708 [2024-11-20 07:27:02.944432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.708 [2024-11-20 07:27:02.944439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.708 [2024-11-20 07:27:02.944446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.708 9966.43 IOPS, 38.93 MiB/s [2024-11-20T07:27:23.911Z] 10031.81 IOPS, 39.19 MiB/s [2024-11-20T07:27:23.911Z] 10102.95 IOPS, 39.46 MiB/s [2024-11-20T07:27:23.911Z] 10165.71 IOPS, 39.71 MiB/s [2024-11-20T07:27:23.911Z] 10231.41 IOPS, 39.97 MiB/s [2024-11-20T07:27:23.911Z] 10291.42 IOPS, 40.20 MiB/s [2024-11-20T07:27:23.911Z] 10350.27 IOPS, 40.43 MiB/s [2024-11-20T07:27:23.911Z] 10402.12 IOPS, 40.63 MiB/s [2024-11-20T07:27:23.911Z] 10452.30 IOPS, 40.83 MiB/s [2024-11-20T07:27:23.911Z] 10502.18 IOPS, 41.02 MiB/s [2024-11-20T07:27:23.911Z] [2024-11-20 07:27:13.006619] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
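The records above are the multipath test's failover loop: connect() to the second path (10.0.0.2:4421) fails with errno 111 (ECONNREFUSED) while that listener is down, each controller reset attempt fails, and once the listener is back the next poll logs "Resetting controller successful" while the interleaved IOPS samples keep climbing (the MiB/s column is just IOPS times the 4 KiB I/O size, e.g. 10291.42 x 4096 / 2^20 = 40.20). A minimal sketch of driving that failover by hand against a running target; the NQN, address and port are taken from the log above, but the toggle itself is illustrative rather than the script's exact sequence:

# take the secondary path down; the host begins logging ECONNREFUSED reset attempts
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10   # let a few reconnect polls fail
# bring it back; the next poll reconnects and logs "Resetting controller successful"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421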
00:29:59.708 10552.87 IOPS, 41.22 MiB/s [2024-11-20T07:27:23.911Z] 10604.83 IOPS, 41.43 MiB/s [2024-11-20T07:27:23.911Z] 10654.26 IOPS, 41.62 MiB/s [2024-11-20T07:27:23.911Z] 10700.96 IOPS, 41.80 MiB/s [2024-11-20T07:27:23.911Z] 10743.80 IOPS, 41.97 MiB/s [2024-11-20T07:27:23.911Z] 10791.96 IOPS, 42.16 MiB/s [2024-11-20T07:27:23.911Z] 10837.45 IOPS, 42.33 MiB/s [2024-11-20T07:27:23.911Z] 10850.12 IOPS, 42.38 MiB/s [2024-11-20T07:27:23.911Z] 10824.72 IOPS, 42.28 MiB/s [2024-11-20T07:27:23.911Z] 10800.85 IOPS, 42.19 MiB/s [2024-11-20T07:27:23.911Z] Received shutdown signal, test time was about 54.226845 seconds 00:29:59.708 00:29:59.708 Latency(us) 00:29:59.708 [2024-11-20T07:27:23.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.708 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:59.708 Verification LBA range: start 0x0 length 0x4000 00:29:59.708 Nvme0n1 : 54.23 10794.31 42.17 0.00 0.00 11835.30 538.78 7020619.62 00:29:59.708 [2024-11-20T07:27:23.911Z] =================================================================================================================== 00:29:59.708 [2024-11-20T07:27:23.911Z] Total : 10794.31 42.17 0.00 0.00 11835.30 538.78 7020619.62 00:29:59.708 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@99 -- # sync 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@102 -- # set +e 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:59.708 rmmod nvme_tcp 00:29:59.708 rmmod nvme_fabrics 00:29:59.708 rmmod nvme_keyring 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@106 -- # set -e 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@107 -- # return 0 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@336 -- # '[' -n 79443 ']' 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@337 -- # killprocess 79443 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79443 ']' 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79443 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79443 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.708 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.708 killing process with pid 79443 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79443' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79443 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79443 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@254 -- # local dev 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:29:59.709 07:27:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@274 -- # iptr 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-save 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:59.709 00:29:59.709 real 0m59.439s 00:29:59.709 user 2m47.383s 00:29:59.709 sys 0m13.913s 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:59.709 ************************************ 00:29:59.709 END TEST nvmf_host_multipath 00:29:59.709 ************************************ 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.709 ************************************ 00:29:59.709 START TEST nvmf_timeout 00:29:59.709 ************************************ 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:59.709 * Looking for test storage... 
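The nvmf_fini trace above (nvmftestinit below runs the same teardown first) reduces to a few ip(8) operations. A rough equivalent, assuming the device and namespace names used throughout this run; target0/target1 are skipped with 'continue' in the trace because they were moved into the namespace and vanish with it, and deleting one end of a veth pair removes its peer:

ip netns delete nvmf_ns_spdk      # takes target0/target1 with it
ip link delete nvmf_br            # the main bridge
ip link delete initiator0         # also removes the initiator0_br peer
ip link delete initiator1
# the iptr helper amounts to: strip only the SPDK_NVMF-tagged rules, keep the rest
iptables-save | grep -v SPDK_NVMF | iptables-restore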
00:29:59.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.709 --rc genhtml_branch_coverage=1 00:29:59.709 --rc genhtml_function_coverage=1 00:29:59.709 --rc genhtml_legend=1 00:29:59.709 --rc geninfo_all_blocks=1 00:29:59.709 --rc geninfo_unexecuted_blocks=1 00:29:59.709 00:29:59.709 ' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.709 --rc genhtml_branch_coverage=1 00:29:59.709 --rc genhtml_function_coverage=1 00:29:59.709 --rc genhtml_legend=1 00:29:59.709 --rc geninfo_all_blocks=1 00:29:59.709 --rc geninfo_unexecuted_blocks=1 00:29:59.709 00:29:59.709 ' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.709 --rc genhtml_branch_coverage=1 00:29:59.709 --rc genhtml_function_coverage=1 00:29:59.709 --rc genhtml_legend=1 00:29:59.709 --rc geninfo_all_blocks=1 00:29:59.709 --rc geninfo_unexecuted_blocks=1 00:29:59.709 00:29:59.709 ' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.709 --rc genhtml_branch_coverage=1 00:29:59.709 --rc genhtml_function_coverage=1 00:29:59.709 --rc genhtml_legend=1 00:29:59.709 --rc geninfo_all_blocks=1 00:29:59.709 --rc geninfo_unexecuted_blocks=1 00:29:59.709 00:29:59.709 ' 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:59.709 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.710 
07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@50 -- # : 0 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:59.710 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:59.710 07:27:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@260 -- # remove_target_ns 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@280 -- # nvmf_veth_init 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@223 -- # create_target_ns 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@224 -- # create_main_bridge 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@105 -- # delete_main_bridge 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # return 0 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:29:59.710 
07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@28 -- # local -g _dev 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:29:59.710 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up initiator0 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local 
dev=initiator0_br in_ns= 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target0 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0 up 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target0_br 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target0 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772161 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:29:59.711 10.0.0.1 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772162 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:29:59.711 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:29:59.972 10.0.0.2 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator0 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
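Each setup_interface_pair pass traced here is one veth pair per side, with the target end pushed into the test namespace and both host-side *_br peers enslaved to nvmf_br; the ip_pool counter is just a packed IPv4 address that val_to_ip unpacks with printf (167772161 = 0x0A000001 = 10.0.0.1, 167772162 = 10.0.0.2). A condensed sketch of pair 0, assuming the names from the trace:

ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                 # target side lives in the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br               # host-side peers hang off the bridge
ip link set target0_br master nvmf_br
for d in initiator0 initiator0_br target0_br; do ip link set "$d" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up
# the real helper tags this rule with an SPDK_NVMF comment so iptr can strip it later
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT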
00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target0_br 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.972 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/setup.sh@151 -- # set_up initiator1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1 up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772163 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:29:59.973 07:27:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:29:59.973 10.0.0.3 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772164 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:29:59.973 10.0.0.4 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator1 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:29:59.973 07:27:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:29:59.973 07:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target1_br 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@38 -- # ping_ips 2 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:59.973 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:59.974 07:27:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:59.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:29:59.974 00:29:59.974 --- 10.0.0.1 ping statistics --- 00:29:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.974 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:59.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.016 ms 00:29:59.974 00:29:59.974 --- 10.0.0.2 ping statistics --- 00:29:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.974 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:29:59.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:59.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:29:59.974 00:29:59.974 --- 10.0.0.3 ping statistics --- 00:29:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.974 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:29:59.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:29:59.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:29:59.974 00:29:59.974 --- 10.0.0.4 ping statistics --- 00:29:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.974 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@281 -- # return 0 00:29:59.974 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # 
[[ -n initiator1 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:59.975 07:27:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:59.975 ' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@328 -- # nvmfpid=80668 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@329 -- # waitforlisten 80668 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80668 ']' 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
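The xtrace up to this point shows nvmf/setup.sh resolving each initiator/target address from the interface's ifalias file and ping-testing every pair, running the target-side commands inside the nvmf_ns_spdk network namespace via a bash nameref to NVMF_TARGET_NS_CMD. A minimal sketch of the two helpers, reconstructed from the traced commands only (the real functions in test/nvmf/setup.sh carry more bookkeeping, so treat this as an approximation):

  # Read the IP stored in a device's ifalias; in_ns, when set, names a bash
  # array (e.g. NVMF_TARGET_NS_CMD) holding an "ip netns exec <ns>" prefix.
  get_ip_address() {
      local dev=$1 in_ns=$2 ip
      if [[ -n $in_ns ]]; then
          local -n ns=$in_ns
      fi
      ip=$(eval "${ns[*]-} cat /sys/class/net/$dev/ifalias")
      [[ -n $ip ]] && echo "$ip"
  }

  # Verify reachability with a single ICMP echo, through the same optional prefix.
  ping_ip() {
      local ip=$1 in_ns=$2 count=1
      if [[ -n $in_ns ]]; then
          local -n ns=$in_ns
      fi
      eval "${ns[*]-} ping -c $count $ip"
  }

This is why the trace alternates between bare "cat /sys/class/net/initiatorX/ifalias" for initiators and "ip netns exec nvmf_ns_spdk ..." for targets: the namespace prefix is injected only when the caller passes NVMF_TARGET_NS_CMD.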
00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:59.975 07:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:00.234 [2024-11-20 07:27:24.175235] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:00.234 [2024-11-20 07:27:24.175279] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.234 [2024-11-20 07:27:24.310305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:00.234 [2024-11-20 07:27:24.339612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.234 [2024-11-20 07:27:24.339647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.234 [2024-11-20 07:27:24.339652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.234 [2024-11-20 07:27:24.339656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.234 [2024-11-20 07:27:24.339660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.234 [2024-11-20 07:27:24.340251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.234 [2024-11-20 07:27:24.340252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.234 [2024-11-20 07:27:24.368366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:01.178 [2024-11-20 07:27:25.265708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.178 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:01.450 Malloc0 00:30:01.450 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.707 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.708 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.966 [2024-11-20 07:27:25.967041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=80712 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 80712 /var/tmp/bdevperf.sock 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80712 ']' 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.966 07:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:01.966 [2024-11-20 07:27:26.016330] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
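Collected from the rpc.py invocations traced above, the target-side bring-up amounts to the following sequence (paths, NQN, serial, and addresses exactly as in this run; the comments are interpretation, not log output):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport; -u 8192 caps in-capsule data at 8 KiB, -o toggles the
  # TCP C2H success optimization
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # Subsystem allowing any host (-a), with a fixed serial number (-s)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Listen on the target0 address resolved earlier (NVMF_FIRST_TARGET_IP)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420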
00:30:01.966 [2024-11-20 07:27:26.016391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80712 ] 00:30:01.966 [2024-11-20 07:27:26.155005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.224 [2024-11-20 07:27:26.190948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.224 [2024-11-20 07:27:26.221889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:02.789 07:27:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.789 07:27:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:02.789 07:27:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:03.046 07:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:03.304 NVMe0n1 00:30:03.304 07:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=80735 00:30:03.304 07:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.304 07:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:30:03.304 Running I/O for 10 seconds... 00:30:04.239 07:27:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.499 12958.00 IOPS, 50.62 MiB/s [2024-11-20T07:27:28.702Z] [2024-11-20 07:27:28.552647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.499 [2024-11-20 07:27:28.552793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114592 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.499 [2024-11-20 07:27:28.552942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.499 [2024-11-20 07:27:28.552947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.552955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.552960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.552967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.552973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.552981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.552986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.552994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.552999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:04.500 [2024-11-20 07:27:28.553012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 
07:27:28.553146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.500 [2024-11-20 07:27:28.553406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.500 [2024-11-20 07:27:28.553471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.500 [2024-11-20 07:27:28.553478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:04.501 [2024-11-20 07:27:28.553684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.501 [2024-11-20 07:27:28.553741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 07:27:28.553800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.501 [2024-11-20 07:27:28.553806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.501 [2024-11-20 
07:27:28.553814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.501 [2024-11-20 07:27:28.553819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the rest of the queued I/O: READ lba 114336 through 114528 and WRITE lba 114928 through 115048, every command completed with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:30:04.502 [2024-11-20 07:27:28.554399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:04.502 [2024-11-20 07:27:28.554410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:04.502 [2024-11-20 07:27:28.554416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0
00:30:04.502 [2024-11-20 07:27:28.554423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.502 [2024-11-20 07:27:28.554686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:04.502 [2024-11-20 07:27:28.554752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1e50 (9): Bad file descriptor
00:30:04.502 [2024-11-20 07:27:28.554817] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.502 [2024-11-20 07:27:28.554832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1e50 with addr=10.0.0.2, port=4420
00:30:04.502 [2024-11-20 07:27:28.554839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1e50 is same with the state(6) to be set
00:30:04.502 [2024-11-20 07:27:28.554849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1e50 (9): Bad file descriptor
00:30:04.502 [2024-11-20 07:27:28.554859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:04.502 [2024-11-20 07:27:28.554864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:04.502 [2024-11-20 07:27:28.554870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:04.502 [2024-11-20 07:27:28.554877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
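errno = 111 is ECONNREFUSED on Linux: every reconnect attempt is refused because the target no longer has a listener on 10.0.0.2:4420, so the host cycles through disconnect, reconnect, and reset failure. The same cycle can be provoked by hand with the RPCs this test uses later in the log; a minimal sketch, assuming a target is already up with subsystem nqn.2016-06.io.spdk:cnode1 and the paths used in this job:

  #!/usr/bin/env bash
  # Toggle the TCP listener to provoke the refusal/reconnect cycle above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the listener: in-flight I/O is aborted (SQ DELETION) and the
  # initiator's connect() attempts start failing with errno 111.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 2

  # Restore it: the next scheduled reconnect attempt should then succeed.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420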
00:30:04.502 [2024-11-20 07:27:28.554883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:04.502 07:27:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:30:06.368 7127.00 IOPS, 27.84 MiB/s [2024-11-20T07:27:30.571Z]
00:30:06.368 4751.33 IOPS, 18.56 MiB/s [2024-11-20T07:27:30.571Z]
00:30:06.368 [2024-11-20 07:27:30.555077] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.368 [2024-11-20 07:27:30.555121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1e50 with addr=10.0.0.2, port=4420
00:30:06.368 [2024-11-20 07:27:30.555130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1e50 is same with the state(6) to be set
00:30:06.368 [2024-11-20 07:27:30.555143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1e50 (9): Bad file descriptor
00:30:06.368 [2024-11-20 07:27:30.555159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:06.368 [2024-11-20 07:27:30.555164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:06.368 [2024-11-20 07:27:30.555170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:06.368 [2024-11-20 07:27:30.555177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:06.368 [2024-11-20 07:27:30.555183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:06.626 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:06.883 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:30:06.883 07:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:30:08.470 3563.50 IOPS, 13.92 MiB/s [2024-11-20T07:27:32.673Z]
00:30:08.470 2850.80 IOPS, 11.14 MiB/s [2024-11-20T07:27:32.673Z]
00:30:08.470 [2024-11-20 07:27:32.555416] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.470 [2024-11-20 07:27:32.555458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1e50 with addr=10.0.0.2, port=4420
00:30:08.470 [2024-11-20 07:27:32.555466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1e50 is same with the state(6) to be set
00:30:08.470 [2024-11-20 07:27:32.555480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1e50 (9): Bad file descriptor
00:30:08.470 [2024-11-20 07:27:32.555490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:08.470 [2024-11-20 07:27:32.555495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:08.470 [2024-11-20 07:27:32.555501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:08.470 [2024-11-20 07:27:32.555507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:08.470 [2024-11-20 07:27:32.555513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:10.360 2375.67 IOPS, 9.28 MiB/s [2024-11-20T07:27:34.563Z]
00:30:10.360 2036.29 IOPS, 7.95 MiB/s [2024-11-20T07:27:34.563Z]
00:30:10.360 [2024-11-20 07:27:34.555685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:10.360 [2024-11-20 07:27:34.555717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:10.360 [2024-11-20 07:27:34.555725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:10.360 [2024-11-20 07:27:34.555730] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:30:10.360 [2024-11-20 07:27:34.555738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:11.550 1781.75 IOPS, 6.96 MiB/s
00:30:11.550 Latency(us)
00:30:11.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:11.550 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:11.550 Verification LBA range: start 0x0 length 0x4000
00:30:11.550 NVMe0n1 : 8.10 1759.65 6.87 15.80 0.00 71961.78 3024.74 7020619.62
00:30:11.550 ===================================================================================================================
00:30:11.550 Total : 1759.65 6.87 15.80 0.00 71961.78 3024.74 7020619.62
00:30:11.550 {
00:30:11.550   "results": [
00:30:11.550     {
00:30:11.550       "job": "NVMe0n1",
00:30:11.550       "core_mask": "0x4",
00:30:11.550       "workload": "verify",
00:30:11.550       "status": "finished",
00:30:11.550       "verify_range": {
00:30:11.550         "start": 0,
00:30:11.550         "length": 16384
00:30:11.550       },
00:30:11.550       "queue_depth": 128,
00:30:11.550       "io_size": 4096,
00:30:11.550       "runtime": 8.100481,
00:30:11.550       "iops": 1759.6485937069663,
00:30:11.550       "mibps": 6.873627319167837,
00:30:11.550       "io_failed": 128,
00:30:11.550       "io_timeout": 0,
00:30:11.550       "avg_latency_us": 71961.77761689293,
00:30:11.550       "min_latency_us": 3024.7384615384617,
00:30:11.550       "max_latency_us": 7020619.618461538
00:30:11.550     }
00:30:11.550   ],
00:30:11.550   "core_count": 1
00:30:11.550 }
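The decaying samples above are cumulative averages, not per-interval rates: roughly 14,254 I/Os completed in the first second, before the listener disappeared, and each later sample is that same total divided by the elapsed seconds (7127.00 = 14254/2, 4751.33 = 14254/3, down to the final 1759.65 IOPS over the 8.100481 s runtime). The io_failed count of 128 is exactly one full queue depth: the commands in flight when the submission queue was deleted. A quick check of the samples:

  # Each printed sample equals (I/Os completed before the stall) / (elapsed s).
  # bc's scale=2 truncates rather than rounds, hence 2375.66 vs the log's 2375.67.
  for t in 2 3 4 5 6 7 8; do
      echo "scale=2; 14254 / $t" | bc
  done
  # 7127.00  4751.33  3563.50  2850.80  2375.66  2036.28  1781.75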
00:30:11.808 07:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:30:11.808 07:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:11.808 07:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:12.065 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:30:12.065 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:30:12.065 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 80735
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 80712
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80712 ']'
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80712
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80712
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:30:12.324 killing process with pid 80712
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80712'
00:30:12.324 Received shutdown signal, test time was about 8.969571 seconds
00:30:12.324
00:30:12.324 Latency(us)
00:30:12.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.324 ===================================================================================================================
00:30:12.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80712
00:30:12.324 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80712
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:12.582 [2024-11-20 07:27:36.704497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=80852
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 80852 /var/tmp/bdevperf.sock
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80852 ']'
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:12.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
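The @41/@37 traces show how the test decides the controller is really gone: it queries the bdevperf application over its RPC socket and compares the names jq extracts. A reconstruction of those two helpers (the real definitions live in host/timeout.sh and may differ in detail):

  # Approximate equivalents of the helpers traced at host/timeout.sh@37/@41.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  get_controller() { $rpc bdev_nvme_get_controllers | jq -r '.[].name'; }
  get_bdev()       { $rpc bdev_get_bdevs            | jq -r '.[].name'; }

While the controller is attached these print NVMe0 and NVMe0n1; once the loss timeout expires and the controller is deleted, both RPCs return empty arrays, which is exactly what the [[ '' == '' ]] assertions above verify.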
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:12.582 07:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:13.772 [2024-11-20 07:27:36.750242] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:30:13.772 [2024-11-20 07:27:36.750298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80852 ]
00:30:13.772 [2024-11-20 07:27:36.885875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:13.772 [2024-11-20 07:27:36.917303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:13.772 [2024-11-20 07:27:36.945870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:30:13.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:13.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:30:13.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:13.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:30:14.029 NVMe0n1
00:30:14.029 07:27:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=80876
00:30:14.029 07:27:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:30:14.029 07:27:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:14.029 Running I/O for 10 seconds...
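The attach line arms the three timers the whole test revolves around: --reconnect-delay-sec 1 retries the connection every second, --fast-io-fail-timeout-sec 2 starts failing queued I/O two seconds into an outage, and --ctrlr-loss-timeout-sec 5 gives up and deletes the controller after five. A condensed replay of this setup, using the same binaries, flags, and addresses as traced above (the socket-wait loop is a simplified stand-in for waitforlisten):

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk

  # Start bdevperf idle (-z): queue depth 128, 4 KiB verify workload, 10 s run.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &

  # Crude wait for the RPC socket to appear.
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

  rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_set_options -r -1
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the timed run; this prints the IOPS samples and JSON summary seen above.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests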
00:30:14.961 07:27:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:15.221 9493.00 IOPS, 37.08 MiB/s [2024-11-20T07:27:39.424Z]
00:30:15.221 [2024-11-20 07:27:39.301805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14370a0 is same with the state(6) to be set
[... the previous record repeats dozens of times, timestamps 07:27:39.301805 through 07:27:39.302216, as the listener is removed ...]
00:30:15.221 [2024-11-20 07:27:39.302842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:15.221 [2024-11-20 07:27:39.302872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
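Every command queued by bdevperf (-q 128) gets printed together with its aborted completion when the submission queue is deleted, so a saved copy of this log can confirm the abort count directly; a small sketch, with bdevperf.log as a hypothetical file name:

  # Count aborted completions; compare with the JSON's io_failed
  # (128, one full queue depth, for the first run above).
  grep -c 'ABORTED - SQ DELETION' bdevperf.log

  # List the distinct LBAs whose commands were cut off.
  grep -o 'lba:[0-9]*' bdevperf.log | sort -t: -k2 -un | head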
[... the same command/completion pair repeats for each remaining queued command, READ lba 83392 through 84000, all completed with ABORTED - SQ DELETION (00/08) qid:1, identical in form to the 07:27:28 stall above ...]
00:30:15.222 [2024-11-20 07:27:39.303722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:30:15.222 [2024-11-20 07:27:39.303726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-11-20 07:27:39.303732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-11-20 07:27:39.303736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-11-20 07:27:39.303743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-11-20 07:27:39.303748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 
07:27:39.303834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.303865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.303992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.303998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.304002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.304013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-11-20 07:27:39.304023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-11-20 07:27:39.304033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211a1d0 is same with the state(6) to be set 00:30:15.223 [2024-11-20 07:27:39.304045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:30:15.223 [2024-11-20 07:27:39.304048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84128 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84256 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84272 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84280 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84288 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304149] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84304 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84312 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84320 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84328 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84336 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84344 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84352 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.304297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84360 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.304301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.304306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.304309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 07:27:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:30:15.223 [2024-11-20 07:27:39.318899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84368 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.318924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.318933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.318939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84376 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.318950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.318955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.318960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.318964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84384 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.318969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.318974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.318977] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.318981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84392 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.318985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.318990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.223 [2024-11-20 07:27:39.318994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.223 [2024-11-20 07:27:39.318997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84400 len:8 PRP1 0x0 PRP2 0x0 00:30:15.223 [2024-11-20 07:27:39.319002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.319092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.223 [2024-11-20 07:27:39.319100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.319107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.223 [2024-11-20 07:27:39.319111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.319116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.223 [2024-11-20 07:27:39.319122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.319127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.223 [2024-11-20 07:27:39.319132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-11-20 07:27:39.319136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:15.223 [2024-11-20 07:27:39.319324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.223 [2024-11-20 07:27:39.319343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:15.223 [2024-11-20 07:27:39.319400] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.224 [2024-11-20 07:27:39.319415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420 00:30:15.224 [2024-11-20 07:27:39.319420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:15.224 [2024-11-20 07:27:39.319429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:15.224 [2024-11-20 07:27:39.319437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.224 [2024-11-20 07:27:39.319441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.224 [2024-11-20 07:27:39.319448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.224 [2024-11-20 07:27:39.319453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.224 [2024-11-20 07:27:39.319458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:16.155 5211.50 IOPS, 20.36 MiB/s [2024-11-20T07:27:40.358Z] 07:27:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.155 [2024-11-20 07:27:40.319553] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 07:27:40.319590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420 00:30:16.156 [2024-11-20 07:27:40.319599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:16.156 [2024-11-20 07:27:40.319612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:16.156 [2024-11-20 07:27:40.319622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:16.156 [2024-11-20 07:27:40.319626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:16.156 [2024-11-20 07:27:40.319632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:16.156 [2024-11-20 07:27:40.319639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:16.156 [2024-11-20 07:27:40.319644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:16.413 [2024-11-20 07:27:40.464675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.413 07:27:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 80876 00:30:17.346 3474.33 IOPS, 13.57 MiB/s [2024-11-20T07:27:41.549Z] [2024-11-20 07:27:41.337247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
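What this stretch of the log reflects is the timeout test toggling the target's listener out from under a live connection: with the listener gone, in-flight and queued host I/O is completed with ABORTED - SQ DELETION (00/08), reconnect attempts fail with connect() errno 111, and the controller reset only succeeds once the listener is re-added. A minimal sketch of that toggle, using the same rpc.py invocations, NQN, address and port that appear in the log above (the surrounding script logic is an assumption, not a quote of host/timeout.sh):

  # drop the listener; I/O in flight on 10.0.0.2:4420 is aborted (SQ DELETION) and reconnects fail
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # restore the listener; the host's next reset attempt reconnects and completes successfully
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420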
00:30:19.212 2605.75 IOPS, 10.18 MiB/s [2024-11-20T07:27:44.348Z] 4330.80 IOPS, 16.92 MiB/s [2024-11-20T07:27:45.280Z] 5742.33 IOPS, 22.43 MiB/s [2024-11-20T07:27:46.212Z] 6750.57 IOPS, 26.37 MiB/s [2024-11-20T07:27:47.585Z] 7506.75 IOPS, 29.32 MiB/s [2024-11-20T07:27:48.517Z] 8097.56 IOPS, 31.63 MiB/s [2024-11-20T07:27:48.517Z] 8576.30 IOPS, 33.50 MiB/s
00:30:24.314 Latency(us)
00:30:24.314 [2024-11-20T07:27:48.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:24.314 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:24.314 Verification LBA range: start 0x0 length 0x4000
00:30:24.314 NVMe0n1 : 10.01 8583.18 33.53 0.00 0.00 14884.19 1077.56 3032804.43
00:30:24.314 [2024-11-20T07:27:48.517Z] ===================================================================================================================
00:30:24.314 [2024-11-20T07:27:48.517Z] Total : 8583.18 33.53 0.00 0.00 14884.19 1077.56 3032804.43
00:30:24.314 {
00:30:24.314   "results": [
00:30:24.314     {
00:30:24.314       "job": "NVMe0n1",
00:30:24.314       "core_mask": "0x4",
00:30:24.314       "workload": "verify",
00:30:24.314       "status": "finished",
00:30:24.314       "verify_range": {
00:30:24.314         "start": 0,
00:30:24.314         "length": 16384
00:30:24.314       },
00:30:24.314       "queue_depth": 128,
00:30:24.314       "io_size": 4096,
00:30:24.314       "runtime": 10.006902,
00:30:24.314       "iops": 8583.175891999343,
00:30:24.314       "mibps": 33.52803082812243,
00:30:24.314       "io_failed": 0,
00:30:24.314       "io_timeout": 0,
00:30:24.314       "avg_latency_us": 14884.188688955503,
00:30:24.314       "min_latency_us": 1077.563076923077,
00:30:24.314       "max_latency_us": 3032804.4307692307
00:30:24.314     }
00:30:24.314   ],
00:30:24.314   "core_count": 1
00:30:24.314 }
00:30:24.314 07:27:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=80986
00:30:24.314 07:27:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:24.314 07:27:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:30:24.314 Running I/O for 10 seconds...
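The throughput figures above are internally consistent: MiB/s follows from IOPS times the 4096-byte IO size. A minimal shell check, assuming the JSON block above has been saved to results.json (a hypothetical path; only jq and awk are used):

  # 8583.18 IOPS * 4096 B per IO / 2^20 B per MiB = 33.53 MiB/s, matching the "mibps" field
  awk 'BEGIN { printf "%.2f MiB/s\n", 8583.175891999343 * 4096 / (1024 * 1024) }'
  # pull the same figures straight out of the results document
  jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s"' results.json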
00:30:25.246 07:27:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:25.509 9876.00 IOPS, 38.58 MiB/s [2024-11-20T07:27:49.712Z] [2024-11-20 07:27:49.451393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14385d0 is same with the state(6) to be set
[... the same recv-state *ERROR* entry repeats 25 more times for tqpair=0x14385d0, timestamps 07:27:49.451433 through 07:27:49.451526 ...]
00:30:25.510 [2024-11-20 07:27:49.453552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:25.510 [2024-11-20 07:27:49.453585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 27 more READ command/completion pairs, lba 89024 through 89232 in steps of 8, all ABORTED - SQ DELETION (00/08) ...]
00:30:25.511 [2024-11-20 07:27:49.453887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:25.511 [2024-11-20 07:27:49.453892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 more WRITE command/completion pairs, lba 89264 through 89368, all ABORTED - SQ DELETION (00/08) ...]
00:30:25.511 [2024-11-20 07:27:49.454046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:25.511 [2024-11-20 07:27:49.454050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.511 [2024-11-20 07:27:49.454056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b290 is same with the state(6) to be set
00:30:25.511 [2024-11-20 07:27:49.454062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:25.511 [2024-11-20 07:27:49.454065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:25.511 [2024-11-20 07:27:49.454070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89248 len:8 PRP1 0x0 PRP2 0x0
00:30:25.511 [2024-11-20 07:27:49.454074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort-queued-i/o, manual-complete, ABORTED - SQ DELETION sequence repeats for queued WRITEs, lba 89376 through 89424 ...]
00:30:25.511 [2024-11-20 07:27:49.454196] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.511 [2024-11-20 07:27:49.454200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.511 [2024-11-20 07:27:49.454204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:30:25.511 [2024-11-20 07:27:49.454208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.511 [2024-11-20 07:27:49.454213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.511 [2024-11-20 07:27:49.454216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.511 [2024-11-20 07:27:49.454229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:30:25.511 [2024-11-20 07:27:49.454235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.511 [2024-11-20 07:27:49.454240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.511 [2024-11-20 07:27:49.454243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.511 [2024-11-20 07:27:49.454247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:25.512 [2024-11-20 07:27:49.454308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89480 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89488 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89496 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89504 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89512 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89520 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454414] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89528 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89536 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89544 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89552 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89560 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89568 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89576 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.512 [2024-11-20 07:27:49.454583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.512 [2024-11-20 07:27:49.454586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:30:25.512 [2024-11-20 07:27:49.454591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.512 [2024-11-20 07:27:49.454595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 
[2024-11-20 07:27:49.454619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89672 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89680 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89704 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89712 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:89720 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89728 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89736 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89744 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89752 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.513 [2024-11-20 07:27:49.454900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.513 [2024-11-20 07:27:49.454905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.513 [2024-11-20 07:27:49.454909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 PRP1 0x0 PRP2 0x0 00:30:25.513 [2024-11-20 07:27:49.454913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.454918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.454921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.454925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89768 len:8 PRP1 0x0 PRP2 0x0 
00:30:25.514 [2024-11-20 07:27:49.454930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.454934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.454938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.454941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89776 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.454946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.454951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.454954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.454958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89784 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.454962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.454969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.454972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.454976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89792 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.454980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.454985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.454989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.454993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89800 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.454997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.455002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.455005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.455010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89808 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.455014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.455019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.455022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.455026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89816 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.455031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.455036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.455040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.455044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89824 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.455049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.455054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89832 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89840 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89848 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89856 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89864 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89872 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89880 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89888 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89896 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89904 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.514 [2024-11-20 07:27:49.462434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.514 [2024-11-20 07:27:49.462437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.514 [2024-11-20 07:27:49.462441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89912 len:8 PRP1 0x0 PRP2 0x0 00:30:25.514 [2024-11-20 07:27:49.462446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:25.515 [2024-11-20 07:27:49.462450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89920 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89928 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89936 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89944 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89952 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89960 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462547] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89968 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89976 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89984 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89992 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90000 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90008 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90016 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90024 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.515 [2024-11-20 07:27:49.462682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.515 [2024-11-20 07:27:49.462686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90032 len:8 PRP1 0x0 PRP2 0x0 00:30:25.515 [2024-11-20 07:27:49.462690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-11-20 07:27:49.462784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-11-20 07:27:49.462796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-11-20 07:27:49.462805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-11-20 07:27:49.462816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-11-20 07:27:49.462820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:25.515 [2024-11-20 07:27:49.462998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:25.515 [2024-11-20 07:27:49.463010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:25.515 [2024-11-20 
07:27:49.463068] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.515 [2024-11-20 07:27:49.463084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420 00:30:25.515 [2024-11-20 07:27:49.463090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:25.516 [2024-11-20 07:27:49.463099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:25.516 [2024-11-20 07:27:49.463108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:25.516 [2024-11-20 07:27:49.463113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:25.516 [2024-11-20 07:27:49.463118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:25.516 [2024-11-20 07:27:49.463124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:30:25.516 [2024-11-20 07:27:49.463130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:25.516 07:27:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:30:26.449 5563.50 IOPS, 21.73 MiB/s [2024-11-20T07:27:50.652Z] [2024-11-20 07:27:50.463241] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.449 [2024-11-20 07:27:50.463281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420 00:30:26.449 [2024-11-20 07:27:50.463290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set 00:30:26.449 [2024-11-20 07:27:50.463303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor 00:30:26.449 [2024-11-20 07:27:50.463314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:26.449 [2024-11-20 07:27:50.463319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:26.449 [2024-11-20 07:27:50.463324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:26.449 [2024-11-20 07:27:50.463332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
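[annotation] The abort flood above is the expected teardown path for this test: the target's listener is down at this point (host/timeout.sh@102 adds it back below), so the host tears down the TCP qpair, deletes its submission queue, and completes every in-flight and queued I/O with an abort status before bdev_nvme schedules a reconnect. The connect() failures report errno = 111, which is ECONNREFUSED on Linux, consistent with nothing listening on 10.0.0.2:4420. A minimal sketch for condensing this pattern when reading a saved copy of the console output; the file name nvmf-timeout.log is an assumption, not something the job produces:
  # Count aborted completions per queue (qid:1 is I/O, qid:0 is admin):
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf-timeout.log | sort | uniq -c
  # Number of distinct LBAs whose commands were flushed during the reset:
  grep nvme_io_qpair_print_command nvmf-timeout.log | grep -o 'lba:[0-9]*' | sort -u | wc -l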
00:30:26.449 [2024-11-20 07:27:50.463339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:27.381 3709.00 IOPS, 14.49 MiB/s [2024-11-20T07:27:51.584Z]
[2024-11-20 07:27:51.463413] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.381 [2024-11-20 07:27:51.463442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420
[2024-11-20 07:27:51.463449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set
00:30:27.381 [2024-11-20 07:27:51.463460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor
00:30:27.381 [2024-11-20 07:27:51.463469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:30:27.381 [2024-11-20 07:27:51.463474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:30:27.381 [2024-11-20 07:27:51.463479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:27.381 [2024-11-20 07:27:51.463485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:30:27.381 [2024-11-20 07:27:51.463490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:28.312 2781.75 IOPS, 10.87 MiB/s [2024-11-20T07:27:52.515Z]
[2024-11-20 07:27:52.466230] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.312 [2024-11-20 07:27:52.466266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ace50 with addr=10.0.0.2, port=4420
[2024-11-20 07:27:52.466273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ace50 is same with the state(6) to be set
00:30:28.312 [2024-11-20 07:27:52.466446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ace50 (9): Bad file descriptor
00:30:28.312 [2024-11-20 07:27:52.466624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:30:28.312 [2024-11-20 07:27:52.466636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
[2024-11-20 07:27:52.466641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:28.312 [2024-11-20 07:27:52.466648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
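[annotation] Each failed attempt above follows the same sequence (connect() refused, sock connection error, flush failure, controller in error state, reset failed, new reset scheduled) and repeats on a roughly one-second cadence at 07:27:49, :50, :51 and :52, covering the three seconds that host/timeout.sh@101 sleeps before restoring the listener. A hedged one-liner to pull the attempt timestamps from a saved log and confirm the cadence (nvmf-timeout.log is again an assumed file name):
  grep 'connect() failed, errno = 111' nvmf-timeout.log | grep -o '[0-9][0-9]:[0-9][0-9]:[0-9][0-9]\.[0-9]*'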
00:30:28.312 [2024-11-20 07:27:52.466654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:28.313 07:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:28.570 [2024-11-20 07:27:52.660425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:28.570 07:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 80986
00:30:29.391 2225.40 IOPS, 8.69 MiB/s
[2024-11-20T07:27:53.594Z] [2024-11-20 07:27:53.497861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
00:30:31.256 3711.67 IOPS, 14.50 MiB/s
[2024-11-20T07:27:56.393Z] 5118.57 IOPS, 19.99 MiB/s
[2024-11-20T07:27:57.766Z] 6173.75 IOPS, 24.12 MiB/s
[2024-11-20T07:27:58.332Z] 6994.44 IOPS, 27.32 MiB/s
[2024-11-20T07:27:58.332Z] 7650.20 IOPS, 29.88 MiB/s
00:30:34.129 Latency(us)
00:30:34.129 [2024-11-20T07:27:58.332Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s  Average      min         max
00:30:34.129 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:34.129 Verification LBA range: start 0x0 length 0x4000
00:30:34.129 NVMe0n1            :      10.01 7652.90   29.89 5245.52   0.00  9901.21   466.31  3019898.88
00:30:34.130 [2024-11-20T07:27:58.333Z] ===================================================================================================================
00:30:34.130 [2024-11-20T07:27:58.333Z] Total              :            7652.90   29.89 5245.52   0.00  9901.21     0.00  3019898.88
00:30:34.388 {
00:30:34.388   "results": [
00:30:34.388     {
00:30:34.388       "job": "NVMe0n1",
00:30:34.388       "core_mask": "0x4",
00:30:34.388       "workload": "verify",
00:30:34.388       "status": "finished",
00:30:34.388       "verify_range": {
00:30:34.388         "start": 0,
00:30:34.388         "length": 16384
00:30:34.388       },
00:30:34.388       "queue_depth": 128,
00:30:34.388       "io_size": 4096,
00:30:34.388       "runtime": 10.005875,
00:30:34.388       "iops": 7652.903918947618,
00:30:34.388       "mibps": 29.894155933389133,
00:30:34.388       "io_failed": 52486,
00:30:34.388       "io_timeout": 0,
00:30:34.388       "avg_latency_us": 9901.20564078723,
00:30:34.388       "min_latency_us": 466.31384615384616,
00:30:34.388       "max_latency_us": 3019898.88
00:30:34.388     }
00:30:34.388   ],
00:30:34.388   "core_count": 1
00:30:34.388 }
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 80852
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80852 ']'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80852
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80852
00:30:34.388 killing process with pid 80852
00:30:34.388 Received shutdown signal, test time was about 10.000000 seconds
00:30:34.388
00:30:34.388 Latency(us)
00:30:34.388 [2024-11-20T07:27:58.591Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average   min   max
00:30:34.388 [2024-11-20T07:27:58.591Z] ===================================================================================================================
00:30:34.388 [2024-11-20T07:27:58.591Z] Total              :       0.00  0.00   0.00    0.00  0.00     0.00  0.00
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80852'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80852
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80852
00:30:34.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81104
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81104 /var/tmp/bdevperf.sock
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81104 ']'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:34.388 07:27:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:30:34.388 [2024-11-20 07:27:58.507780] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
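The xtrace lines above walk through the killprocess helper from common/autotest_common.sh. Reconstructed from just those trace lines as a sketch (not the canonical source; the sudo branch, which needs to target sudo's child instead, is elided here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                           # @954: require a pid argument
        kill -0 "$pid" || return 0                          # @958: nothing to do if it already exited
        local process_name=""
        if [ "$(uname)" = Linux ]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid") # @960: "reactor_2" in this run
        fi
        if [ "$process_name" != sudo ]; then                # @964: sudo-wrapped targets elided in this sketch
            echo "killing process with pid $pid"            # @972
            kill "$pid"                                     # @973
            wait "$pid"                                     # @978: reap the child and propagate its exit status
        fi
    }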
00:30:34.388 [2024-11-20 07:27:58.507842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81104 ] 00:30:34.646 [2024-11-20 07:27:58.637646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.646 [2024-11-20 07:27:58.668475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.646 [2024-11-20 07:27:58.696409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:35.221 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.222 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:35.222 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81104 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:30:35.222 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81116 00:30:35.222 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:30:35.479 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:35.736 NVMe0n1 00:30:35.736 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81157 00:30:35.736 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:35.736 07:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:30:35.736 Running I/O for 10 seconds... 
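The ten-second run above is driven entirely over the bdevperf RPC socket; --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 tell bdev_nvme to retry a lost controller roughly every 2 seconds and delete it if it stays unreachable for 5. A condensed replay of the same sequence, with every command and flag copied from the trace (rpc.py again shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; backgrounding mirrors the harness):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &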
00:30:36.668 07:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:36.928 19431.00 IOPS, 75.90 MiB/s
[2024-11-20T07:28:01.131Z] [2024-11-20 07:28:01.003982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1434c10 is same with the state(6) to be set
00:30:36.929 [message above repeated verbatim for every record from 07:28:01.004020 through 07:28:01.004525, only the microsecond timestamp advancing]
00:30:36.930 [2024-11-20 07:28:01.004563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.930 [2024-11-20 07:28:01.004592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:36.930 [2024-11-20 07:28:01.004604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.930 [2024-11-20 07:28:01.004611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:36.930 [2024-11-20 07:28:01.004724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 
07:28:01.004832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.930 [2024-11-20 07:28:01.004853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-11-20 07:28:01.004857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.004991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.004996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005044] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-11-20 07:28:01.005217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.931 [2024-11-20 07:28:01.005231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 07:28:01.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.932 [2024-11-20 07:28:01.005371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.932 [2024-11-20 
07:28:01.005376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:36.932 [2024-11-20 07:28:01.005382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.932 [2024-11-20 07:28:01.005387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:36.932 [... 53 further identical READ / ABORTED - SQ DELETION pairs elided: cid 50 down to cid 0, then cid 125 and cid 126, each a len:8 READ at a distinct lba ...]
00:30:36.933 [2024-11-20 07:28:01.005966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca4090 is same with the state(6) to be set
00:30:36.933 [2024-11-20 07:28:01.005974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:36.933 [2024-11-20 07:28:01.005978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:36.933 [2024-11-20 07:28:01.005982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27160 len:8 PRP1 0x0 PRP2 0x0
00:30:36.933 [2024-11-20 07:28:01.005987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:36.933 [2024-11-20 07:28:01.006208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:30:36.933 [2024-11-20 07:28:01.006261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e50 (9): Bad file descriptor
00:30:36.934 [2024-11-20 07:28:01.006325] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.934 [2024-11-20 07:28:01.006333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36e50 with addr=10.0.0.2, port=4420
00:30:36.934 [2024-11-20 07:28:01.006338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36e50 is same with the state(6) to be set
00:30:36.934 [2024-11-20 07:28:01.006347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e50 (9): Bad file descriptor
00:30:36.934 [2024-11-20 07:28:01.006356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:30:36.934 [2024-11-20 07:28:01.006360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:30:36.934 [2024-11-20 07:28:01.006366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
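[Editor's note] The "(00/08)" field in every aborted completion above is the NVMe status pair (Status Code Type / Status Code) in hex: SCT 0x0 (Generic Command Status) with SC 0x08 (Command Aborted due to SQ Deletion), i.e. the host deleted the submission queue during its reset and manually completed each queued READ with that status. A minimal shell sketch of the decoding; decode_status is a hypothetical helper for illustration, not part of the SPDK tree:

    # Split the "SCT/SC" hex pair printed by spdk_nvme_print_completion.
    decode_status() {
        local sct=$((16#${1%%/*})) sc=$((16#${1##*/}))
        printf 'SCT=0x%x SC=0x%x\n' "$sct" "$sc"
    }
    decode_status 00/08   # -> SCT=0x0 SC=0x8 (ABORTED - SQ DELETION)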
00:30:36.934 [2024-11-20 07:28:01.006372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:36.934 [2024-11-20 07:28:01.006377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:36.934 07:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81157 00:30:38.797 10732.50 IOPS, 41.92 MiB/s [2024-11-20T07:28:03.257Z] 7155.00 IOPS, 27.95 MiB/s [2024-11-20T07:28:03.257Z] [2024-11-20 07:28:03.006581] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.054 [2024-11-20 07:28:03.006614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36e50 with addr=10.0.0.2, port=4420 00:30:39.054 [2024-11-20 07:28:03.006621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36e50 is same with the state(6) to be set 00:30:39.054 [2024-11-20 07:28:03.006634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e50 (9): Bad file descriptor 00:30:39.054 [2024-11-20 07:28:03.006644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:39.054 [2024-11-20 07:28:03.006648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:39.054 [2024-11-20 07:28:03.006654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:39.054 [2024-11-20 07:28:03.006661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:39.054 [2024-11-20 07:28:03.006667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:41.070 5366.25 IOPS, 20.96 MiB/s [2024-11-20T07:28:05.273Z] 4293.00 IOPS, 16.77 MiB/s [2024-11-20T07:28:05.273Z] [2024-11-20 07:28:05.006915] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-11-20 07:28:05.006950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36e50 with addr=10.0.0.2, port=4420 00:30:41.070 [2024-11-20 07:28:05.006957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36e50 is same with the state(6) to be set 00:30:41.070 [2024-11-20 07:28:05.006969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e50 (9): Bad file descriptor 00:30:41.070 [2024-11-20 07:28:05.006978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:41.070 [2024-11-20 07:28:05.006983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:41.070 [2024-11-20 07:28:05.006989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:41.070 [2024-11-20 07:28:05.006994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
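[Editor's note] The shrinking interim throughput figures (10732.50 and 7155.00 IOPS above, 5366.25 and 4293.00 just below) are a running average, not fresh completions: all I/O finished in roughly the first two seconds, and with the controller stuck in its reconnect loop the same completion count is divided by a growing runtime. A quick arithmetic check, assuming ~21465 total completions (10732.50 IOPS x 2 s):

    total=21465
    for t in 2 3 4 5 6 7 8; do
        echo "scale=2; $total / $t" | bc
    done
    # 10732.50 7155.00 5366.25 4293.00 3577.50 3066.42 2683.12
    # (bc truncates 21465/7; the log rounds it to 3066.43)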
00:30:41.070 [2024-11-20 07:28:05.007000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:42.935 3577.50 IOPS, 13.97 MiB/s [2024-11-20T07:28:07.138Z] 3066.43 IOPS, 11.98 MiB/s [2024-11-20T07:28:07.138Z] [2024-11-20 07:28:07.007177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:42.935 [2024-11-20 07:28:07.007205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:42.935 [2024-11-20 07:28:07.007211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:42.935 [2024-11-20 07:28:07.007216] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:30:42.935 [2024-11-20 07:28:07.007227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:43.868 2683.12 IOPS, 10.48 MiB/s 00:30:43.868 Latency(us) 00:30:43.868 [2024-11-20T07:28:08.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.868 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:30:43.868 NVMe0n1 : 8.11 2646.19 10.34 15.78 0.00 48000.92 6452.78 7020619.62 00:30:43.868 [2024-11-20T07:28:08.071Z] =================================================================================================================== 00:30:43.868 [2024-11-20T07:28:08.071Z] Total : 2646.19 10.34 15.78 0.00 48000.92 6452.78 7020619.62 00:30:43.868 { 00:30:43.868 "results": [ 00:30:43.868 { 00:30:43.868 "job": "NVMe0n1", 00:30:43.868 "core_mask": "0x4", 00:30:43.868 "workload": "randread", 00:30:43.868 "status": "finished", 00:30:43.868 "queue_depth": 128, 00:30:43.868 "io_size": 4096, 00:30:43.868 "runtime": 8.111677, 00:30:43.868 "iops": 2646.1852462813795, 00:30:43.868 "mibps": 10.336661118286639, 00:30:43.868 "io_failed": 128, 00:30:43.868 "io_timeout": 0, 00:30:43.868 "avg_latency_us": 48000.91702880919, 00:30:43.868 "min_latency_us": 6452.775384615385, 00:30:43.868 "max_latency_us": 7020619.618461538 00:30:43.868 } 00:30:43.868 ], 00:30:43.868 "core_count": 1 00:30:43.868 } 00:30:43.868 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:43.868 Attaching 5 probes... 
00:30:43.868 1243.513782: reset bdev controller NVMe0 00:30:43.868 1243.597995: reconnect bdev controller NVMe0 00:30:43.868 3243.832479: reconnect delay bdev controller NVMe0 00:30:43.868 3243.844054: reconnect bdev controller NVMe0 00:30:43.868 5244.167320: reconnect delay bdev controller NVMe0 00:30:43.868 5244.177716: reconnect bdev controller NVMe0 00:30:43.868 7244.479304: reconnect delay bdev controller NVMe0 00:30:43.868 7244.489831: reconnect bdev controller NVMe0 00:30:43.868 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:30:43.868 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:30:43.868 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81116 00:30:43.868 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81104 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81104 ']' 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81104 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81104 00:30:43.869 killing process with pid 81104 00:30:43.869 Received shutdown signal, test time was about 8.166632 seconds 00:30:43.869 00:30:43.869 Latency(us) 00:30:43.869 [2024-11-20T07:28:08.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.869 [2024-11-20T07:28:08.072Z] =================================================================================================================== 00:30:43.869 [2024-11-20T07:28:08.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81104' 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81104 00:30:43.869 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81104 00:30:44.126 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@99 -- # sync 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@102 -- # set +e 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:44.385 07:28:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:44.385 rmmod nvme_tcp 00:30:44.385 rmmod nvme_fabrics 00:30:44.385 rmmod nvme_keyring 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@106 -- # set -e 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@107 -- # return 0 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@336 -- # '[' -n 80668 ']' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@337 -- # killprocess 80668 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80668 ']' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80668 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80668 00:30:44.385 killing process with pid 80668 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80668' 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80668 00:30:44.385 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80668 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@342 -- # nvmf_fini 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@254 -- # local dev 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:30:44.643 07:28:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # _dev=0 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # dev_map=() 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@274 -- # iptr 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-save 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-restore 00:30:44.643 ************************************ 00:30:44.643 END TEST nvmf_timeout 00:30:44.643 ************************************ 00:30:44.643 00:30:44.643 real 0m45.152s 00:30:44.643 user 2m12.762s 00:30:44.643 sys 0m4.232s 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:44.643 ************************************ 00:30:44.643 END TEST nvmf_host 00:30:44.643 ************************************ 00:30:44.643 00:30:44.643 real 4m51.044s 00:30:44.643 user 12m37.802s 
00:30:44.643 sys 0m51.837s 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.643 07:28:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.643 07:28:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:44.643 07:28:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:30:44.643 00:30:44.643 real 11m34.850s 00:30:44.643 user 27m59.398s 00:30:44.643 sys 2m21.545s 00:30:44.643 07:28:08 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.643 07:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:44.643 ************************************ 00:30:44.643 END TEST nvmf_tcp 00:30:44.643 ************************************ 00:30:44.902 07:28:08 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:30:44.902 07:28:08 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:44.902 07:28:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.902 07:28:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.902 07:28:08 -- common/autotest_common.sh@10 -- # set +x 00:30:44.902 ************************************ 00:30:44.902 START TEST nvmf_dif 00:30:44.902 ************************************ 00:30:44.902 07:28:08 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:44.902 * Looking for test storage... 00:30:44.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:44.902 07:28:08 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.902 07:28:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.902 07:28:08 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.902 07:28:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.902 07:28:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:44.902 07:28:09 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.902 07:28:09 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.902 --rc genhtml_branch_coverage=1 00:30:44.902 --rc genhtml_function_coverage=1 00:30:44.902 --rc genhtml_legend=1 00:30:44.902 --rc geninfo_all_blocks=1 00:30:44.902 --rc geninfo_unexecuted_blocks=1 00:30:44.902 00:30:44.902 ' 00:30:44.902 07:28:09 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.902 --rc genhtml_branch_coverage=1 00:30:44.902 --rc genhtml_function_coverage=1 00:30:44.902 --rc genhtml_legend=1 00:30:44.902 --rc geninfo_all_blocks=1 00:30:44.902 --rc geninfo_unexecuted_blocks=1 00:30:44.902 00:30:44.902 ' 00:30:44.902 07:28:09 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.902 --rc genhtml_branch_coverage=1 00:30:44.902 --rc genhtml_function_coverage=1 00:30:44.902 --rc genhtml_legend=1 00:30:44.902 --rc geninfo_all_blocks=1 00:30:44.902 --rc geninfo_unexecuted_blocks=1 00:30:44.902 00:30:44.902 ' 00:30:44.902 07:28:09 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.902 --rc genhtml_branch_coverage=1 00:30:44.902 --rc genhtml_function_coverage=1 00:30:44.902 --rc genhtml_legend=1 00:30:44.902 --rc geninfo_all_blocks=1 00:30:44.902 --rc geninfo_unexecuted_blocks=1 00:30:44.902 00:30:44.902 ' 00:30:44.902 07:28:09 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:44.902 07:28:09 nvmf_dif -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.902 07:28:09 nvmf_dif -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.902 07:28:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.902 07:28:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.903 07:28:09 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.903 07:28:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.903 07:28:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:44.903 07:28:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:44.903 07:28:09 
nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:44.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:44.903 07:28:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:44.903 07:28:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:44.903 07:28:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:44.903 07:28:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:44.903 07:28:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:44.903 07:28:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:30:44.903 07:28:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@280 -- # nvmf_veth_init 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@223 -- # create_target_ns 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@224 -- # create_main_bridge 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@105 -- # delete_main_bridge 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 
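[Editor's note] From here nvmf/setup.sh assembles the virtual test network: one nvmf_br bridge in the host namespace, then per interface pair an initiator veth kept in the host and a target veth moved into nvmf_ns_spdk, with both peer ends enslaved to the bridge. A condensed sketch of the equivalent commands for pair 0 (the surrounding trace shows the script's generalized flow):

    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    ip link set initiator0 up && ip link set initiator0_br up && ip link set target0_br up
    ip netns exec nvmf_ns_spdk ip link set target0 up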
00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:44.903 07:28:09 nvmf_dif -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator0 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:30:44.903 07:28:09 nvmf_dif -- 
nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target0 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0 up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target0_br 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target0 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:30:44.903 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:44.904 10.0.0.1 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:30:44.904 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 
00:30:45.160 10.0.0.2 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator0 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:30:45.160 07:28:09 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target0_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:30:45.161 07:28:09 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:30:45.161 07:28:09 
nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1 up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772163 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 
10 0 0 3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:30:45.161 10.0.0.3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772164 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:30:45.161 10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 
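Condensed, the trace above reduces to a handful of iproute2 operations per initiator/target pair. A minimal sketch of that flow, assuming the helper names seen in the trace (val_to_ip, create_veth, add_to_ns, set_ip) and hard-coding pair 1:

    # Convert a 32-bit integer into a dotted quad, e.g. 167772163 -> 10.0.0.3
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }

    # Each pair is a veth device plus a *_br peer that later joins the nvmf_br bridge
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set initiator1 up
    ip link set initiator1_br up

    # The target end is moved into the nvmf_ns_spdk namespace before it gets its IP
    ip link set target1 netns nvmf_ns_spdk

    ip addr add "$(val_to_ip 167772163)/24" dev initiator1        # 10.0.0.3
    ip netns exec nvmf_ns_spdk \
        ip addr add "$(val_to_ip 167772164)/24" dev target1       # 10.0.0.4

    # Mirror the address into ifalias so later helpers can read it back with cat
    echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias

The ifalias mirror is what get_ip_address consults further down when the legacy NVMF_FIRST_INITIATOR_IP/NVMF_SECOND_TARGET_IP variables are derived.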
00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target1_br 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:30:45.161 07:28:09 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 2 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:45.161 
07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:45.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:30:45.161 00:30:45.161 --- 10.0.0.1 ping statistics --- 00:30:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.161 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:45.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:30:45.161 00:30:45.161 --- 10.0.0.2 ping statistics --- 00:30:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.161 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:30:45.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:45.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:30:45.161 00:30:45.161 --- 10.0.0.3 ping statistics --- 00:30:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.161 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:30:45.161 07:28:09 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:30:45.161 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:30:45.161 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:30:45.162 00:30:45.162 --- 10.0.0.4 ping statistics --- 00:30:45.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.162 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:30:45.162 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:45.162 07:28:09 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:45.162 07:28:09 nvmf_dif -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.162 07:28:09 nvmf_dif -- nvmf/common.sh@281 -- # return 0 00:30:45.162 07:28:09 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:30:45.162 07:28:09 nvmf_dif -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:45.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.419 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.419 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.419 07:28:09 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:30:45.419 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator1/ifalias 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:45.677 07:28:09 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:45.677 07:28:09 nvmf_dif -- 
nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:45.677 ' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:45.677 07:28:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:45.677 07:28:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=81642 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:45.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.677 07:28:09 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 81642 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 81642 ']' 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.677 07:28:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.677 [2024-11-20 07:28:09.707910] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:45.677 [2024-11-20 07:28:09.708067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.677 [2024-11-20 07:28:09.840033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.677 [2024-11-20 07:28:09.873600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.677 [2024-11-20 07:28:09.873764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.677 [2024-11-20 07:28:09.873862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.677 [2024-11-20 07:28:09.873965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.677 [2024-11-20 07:28:09.873984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
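With the interfaces up, bridged, and ping-verified, the harness starts the target application inside the namespace and blocks until its RPC socket answers. A rough equivalent of the nvmfappstart/waitforlisten pair seen in the trace, with a plain socket-existence poll standing in for the real RPC probe:

    # -i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # waitforlisten polls /var/tmp/spdk.sock; a crude stand-in:
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Because the RPC socket is UNIX-domain, rpc_cmd can drive the target from the host side even though the target's network stack is confined to nvmf_ns_spdk.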
00:30:45.677 [2024-11-20 07:28:09.874293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.935 [2024-11-20 07:28:09.903810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:46.500 07:28:10 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.500 07:28:10 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:30:46.500 07:28:10 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:46.500 07:28:10 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.500 07:28:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.500 07:28:10 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.500 07:28:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:46.500 07:28:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 [2024-11-20 07:28:10.617762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.501 07:28:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.501 07:28:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 ************************************ 00:30:46.501 START TEST fio_dif_1_default 00:30:46.501 ************************************ 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 bdev_null0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.501 07:28:10 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.501 [2024-11-20 07:28:10.657830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:46.501 { 00:30:46.501 "params": { 00:30:46.501 "name": "Nvme$subsystem", 00:30:46.501 "trtype": "$TEST_TRANSPORT", 00:30:46.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.501 "adrfam": "ipv4", 00:30:46.501 "trsvcid": "$NVMF_PORT", 00:30:46.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.501 "hdgst": ${hdgst:-false}, 00:30:46.501 "ddgst": ${ddgst:-false} 00:30:46.501 }, 00:30:46.501 "method": "bdev_nvme_attach_controller" 00:30:46.501 } 00:30:46.501 EOF 00:30:46.501 )") 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:46.501 "params": { 00:30:46.501 "name": "Nvme0", 00:30:46.501 "trtype": "tcp", 00:30:46.501 "traddr": "10.0.0.2", 00:30:46.501 "adrfam": "ipv4", 00:30:46.501 "trsvcid": "4420", 00:30:46.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.501 "hdgst": false, 00:30:46.501 "ddgst": false 00:30:46.501 }, 00:30:46.501 "method": "bdev_nvme_attach_controller" 00:30:46.501 }' 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:46.501 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:46.759 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:46.759 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:46.759 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:46.759 07:28:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.759 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:46.759 fio-3.35 00:30:46.759 Starting 1 thread 00:30:58.955 00:30:58.955 filename0: (groupid=0, jobs=1): err= 0: pid=81703: Wed Nov 20 07:28:21 2024 00:30:58.955 read: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(452MiB/10001msec) 00:30:58.955 slat (nsec): min=5393, max=34235, avg=5700.37, stdev=614.26 00:30:58.955 clat (usec): min=268, max=2566, avg=330.00, stdev=37.80 00:30:58.955 lat (usec): min=273, max=2600, avg=335.70, stdev=37.86 00:30:58.955 clat percentiles (usec): 00:30:58.955 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:30:58.955 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 351], 00:30:58.955 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 371], 95.00th=[ 379], 00:30:58.955 | 99.00th=[ 392], 99.50th=[ 396], 99.90th=[ 562], 99.95th=[ 603], 00:30:58.955 | 99.99th=[ 1057] 00:30:58.955 bw ( KiB/s): min=41312, max=50432, per=99.49%, avg=46076.63, stdev=4154.32, samples=19 00:30:58.955 iops : min=10328, max=12608, avg=11519.16, stdev=1038.58, samples=19 00:30:58.955 lat (usec) : 500=99.88%, 750=0.10%, 1000=0.01% 
00:30:58.955 lat (msec) : 2=0.01%, 4=0.01% 00:30:58.955 cpu : usr=88.34%, sys=10.66%, ctx=126, majf=0, minf=9 00:30:58.955 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.955 issued rwts: total=115788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.955 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:58.955 00:30:58.955 Run status group 0 (all jobs): 00:30:58.955 READ: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=452MiB (474MB), run=10001-10001msec 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 ************************************ 00:30:58.955 END TEST fio_dif_1_default 00:30:58.955 ************************************ 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 00:30:58.955 real 0m10.815s 00:30:58.955 user 0m9.313s 00:30:58.955 sys 0m1.249s 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:58.955 07:28:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:58.955 07:28:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 ************************************ 00:30:58.955 START TEST fio_dif_1_multi_subsystems 00:30:58.955 ************************************ 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
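The multi-subsystems test starting here repeats, once per subsystem, the same four RPCs the default test issued for subsystem 0. A condensed sketch using scripts/rpc.py directly, on the assumption that the rpc_cmd wrapper in the trace ends up issuing the same RPC names and arguments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for sub in 0 1; do
        # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
        $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done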
00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 bdev_null0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 [2024-11-20 07:28:21.512348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 bdev_null1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:58.955 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:58.956 { 00:30:58.956 "params": { 00:30:58.956 "name": "Nvme$subsystem", 00:30:58.956 "trtype": "$TEST_TRANSPORT", 00:30:58.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.956 "adrfam": "ipv4", 00:30:58.956 "trsvcid": "$NVMF_PORT", 00:30:58.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.956 "hdgst": ${hdgst:-false}, 00:30:58.956 "ddgst": ${ddgst:-false} 00:30:58.956 }, 00:30:58.956 "method": "bdev_nvme_attach_controller" 00:30:58.956 } 00:30:58.956 EOF 00:30:58.956 )") 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 
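gen_nvmf_target_json builds one JSON fragment per subsystem, joins the fragments with a comma IFS, and pipes the result through jq, which is what produces the two bdev_nvme_attach_controller entries printed below. A stripped-down sketch of the same pattern; the original emits each fragment from a <<-EOF heredoc, replaced here with jq -n to keep the sketch copy-pastable:

    config=()
    for subsystem in 0 1; do
        config+=("$(jq -n --arg n "$subsystem" '{
            params: {
                name: ("Nvme" + $n), trtype: "tcp", traddr: "10.0.0.2",
                adrfam: "ipv4", trsvcid: "4420",
                subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
                hdgst: false, ddgst: false
            },
            method: "bdev_nvme_attach_controller"
        }')")
    done
    IFS=,
    printf '%s\n' "${config[*]}"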
00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:58.956 { 00:30:58.956 "params": { 00:30:58.956 "name": "Nvme$subsystem", 00:30:58.956 "trtype": "$TEST_TRANSPORT", 00:30:58.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.956 "adrfam": "ipv4", 00:30:58.956 "trsvcid": "$NVMF_PORT", 00:30:58.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.956 "hdgst": ${hdgst:-false}, 00:30:58.956 "ddgst": ${ddgst:-false} 00:30:58.956 }, 00:30:58.956 "method": "bdev_nvme_attach_controller" 00:30:58.956 } 00:30:58.956 EOF 00:30:58.956 )") 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
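The joined JSON goes to fio on one file descriptor and the generated job file on another, with the SPDK bdev engine preloaded so fio can open SPDK block devices. How the two descriptors get wired up is not visible in the trace; writing both to temporary files gives an equivalent, simplified invocation (plugin and fio paths taken from the trace, gen_nvmf_target_json/gen_fio_conf being the harness helpers shown above):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    gen_nvmf_target_json 0 1 > /tmp/bdev.json    # the attach-controller JSON below
    gen_fio_conf > /tmp/dif.fio                  # filename0/filename1 job sections

    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio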
00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:58.956 "params": { 00:30:58.956 "name": "Nvme0", 00:30:58.956 "trtype": "tcp", 00:30:58.956 "traddr": "10.0.0.2", 00:30:58.956 "adrfam": "ipv4", 00:30:58.956 "trsvcid": "4420", 00:30:58.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.956 "hdgst": false, 00:30:58.956 "ddgst": false 00:30:58.956 }, 00:30:58.956 "method": "bdev_nvme_attach_controller" 00:30:58.956 },{ 00:30:58.956 "params": { 00:30:58.956 "name": "Nvme1", 00:30:58.956 "trtype": "tcp", 00:30:58.956 "traddr": "10.0.0.2", 00:30:58.956 "adrfam": "ipv4", 00:30:58.956 "trsvcid": "4420", 00:30:58.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.956 "hdgst": false, 00:30:58.956 "ddgst": false 00:30:58.956 }, 00:30:58.956 "method": "bdev_nvme_attach_controller" 00:30:58.956 }' 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:58.956 07:28:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.956 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.956 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.956 fio-3.35 00:30:58.956 Starting 2 threads 00:31:08.921 00:31:08.921 filename0: (groupid=0, jobs=1): err= 0: pid=81868: Wed Nov 20 07:28:32 2024 00:31:08.921 read: IOPS=6766, BW=26.4MiB/s (27.7MB/s)(264MiB/10001msec) 00:31:08.921 slat (usec): min=5, max=147, avg= 8.38, stdev= 4.26 00:31:08.921 clat (usec): min=309, max=2144, avg=568.66, stdev=37.00 00:31:08.921 lat (usec): min=317, max=2153, avg=577.04, stdev=37.94 00:31:08.921 clat percentiles (usec): 00:31:08.921 | 1.00th=[ 498], 5.00th=[ 515], 10.00th=[ 529], 20.00th=[ 545], 00:31:08.921 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:31:08.921 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 603], 95.00th=[ 619], 00:31:08.921 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 889], 00:31:08.921 | 99.99th=[ 2008] 00:31:08.921 bw ( KiB/s): min=25843, max=27776, per=49.93%, avg=27047.37, stdev=624.23, samples=19 00:31:08.921 iops : min= 6460, max= 6944, 
avg=6761.79, stdev=156.12, samples=19 00:31:08.921 lat (usec) : 500=1.21%, 750=98.70%, 1000=0.06% 00:31:08.921 lat (msec) : 2=0.03%, 4=0.01% 00:31:08.921 cpu : usr=90.47%, sys=8.40%, ctx=150, majf=0, minf=0 00:31:08.921 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.921 issued rwts: total=67672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.921 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:08.921 filename1: (groupid=0, jobs=1): err= 0: pid=81869: Wed Nov 20 07:28:32 2024 00:31:08.921 read: IOPS=6774, BW=26.5MiB/s (27.8MB/s)(265MiB/10001msec) 00:31:08.921 slat (nsec): min=4662, max=44623, avg=7941.46, stdev=3906.99 00:31:08.921 clat (usec): min=292, max=2391, avg=568.96, stdev=29.31 00:31:08.921 lat (usec): min=298, max=2406, avg=576.91, stdev=30.28 00:31:08.921 clat percentiles (usec): 00:31:08.921 | 1.00th=[ 529], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 553], 00:31:08.921 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 562], 60.00th=[ 570], 00:31:08.921 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 603], 95.00th=[ 619], 00:31:08.921 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 709], 00:31:08.921 | 99.99th=[ 783] 00:31:08.921 bw ( KiB/s): min=25824, max=28064, per=49.99%, avg=27080.05, stdev=646.70, samples=19 00:31:08.921 iops : min= 6456, max= 7016, avg=6770.00, stdev=161.66, samples=19 00:31:08.921 lat (usec) : 500=0.12%, 750=99.86%, 1000=0.01% 00:31:08.921 lat (msec) : 4=0.01% 00:31:08.921 cpu : usr=90.02%, sys=9.20%, ctx=11, majf=0, minf=0 00:31:08.921 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.921 issued rwts: total=67756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.921 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:08.921 00:31:08.921 Run status group 0 (all jobs): 00:31:08.921 READ: bw=52.9MiB/s (55.5MB/s), 26.4MiB/s-26.5MiB/s (27.7MB/s-27.8MB/s), io=529MiB (555MB), run=10001-10001msec 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.921 ************************************ 00:31:08.921 END TEST fio_dif_1_multi_subsystems 00:31:08.921 ************************************ 00:31:08.921 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.922 00:31:08.922 real 0m10.937s 00:31:08.922 user 0m18.647s 00:31:08.922 sys 0m1.940s 00:31:08.922 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 07:28:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:08.922 07:28:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:08.922 07:28:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 ************************************ 00:31:08.922 START TEST fio_dif_rand_params 00:31:08.922 ************************************ 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:08.922 07:28:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 bdev_null0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.922 [2024-11-20 07:28:32.492123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:08.922 { 00:31:08.922 "params": { 00:31:08.922 "name": "Nvme$subsystem", 00:31:08.922 "trtype": "$TEST_TRANSPORT", 00:31:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.922 "adrfam": "ipv4", 00:31:08.922 "trsvcid": "$NVMF_PORT", 00:31:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.922 "hdgst": ${hdgst:-false}, 00:31:08.922 "ddgst": ${ddgst:-false} 00:31:08.922 }, 00:31:08.922 "method": "bdev_nvme_attach_controller" 00:31:08.922 } 00:31:08.922 EOF 00:31:08.922 )") 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:08.922 "params": { 00:31:08.922 "name": "Nvme0", 00:31:08.922 "trtype": "tcp", 00:31:08.922 "traddr": "10.0.0.2", 00:31:08.922 "adrfam": "ipv4", 00:31:08.922 "trsvcid": "4420", 00:31:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.922 "hdgst": false, 00:31:08.922 "ddgst": false 00:31:08.922 }, 00:31:08.922 "method": "bdev_nvme_attach_controller" 00:31:08.922 }' 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:08.922 07:28:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.922 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:08.922 ... 00:31:08.922 fio-3.35 00:31:08.922 Starting 3 threads 00:31:14.186 00:31:14.186 filename0: (groupid=0, jobs=1): err= 0: pid=82029: Wed Nov 20 07:28:38 2024 00:31:14.186 read: IOPS=345, BW=43.2MiB/s (45.3MB/s)(216MiB/5003msec) 00:31:14.186 slat (nsec): min=3865, max=24182, avg=7734.11, stdev=1680.79 00:31:14.186 clat (usec): min=3230, max=9524, avg=8667.43, stdev=229.86 00:31:14.186 lat (usec): min=3236, max=9533, avg=8675.16, stdev=229.93 00:31:14.186 clat percentiles (usec): 00:31:14.186 | 1.00th=[ 8586], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8717], 00:31:14.186 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8717], 00:31:14.186 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:31:14.186 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[ 9503], 99.95th=[ 9503], 00:31:14.186 | 99.99th=[ 9503] 00:31:14.186 bw ( KiB/s): min=43776, max=44544, per=33.33%, avg=44160.00, stdev=404.77, samples=10 00:31:14.186 iops : min= 342, max= 348, avg=345.00, stdev= 3.16, samples=10 00:31:14.186 lat (msec) : 4=0.17%, 10=99.83% 00:31:14.186 cpu : usr=93.86%, sys=5.66%, ctx=50, majf=0, minf=0 00:31:14.186 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.186 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.186 filename0: (groupid=0, jobs=1): err= 0: pid=82030: Wed Nov 20 07:28:38 2024 00:31:14.186 read: IOPS=345, BW=43.2MiB/s (45.3MB/s)(216MiB/5005msec) 00:31:14.186 slat (nsec): min=5543, max=51594, avg=6268.00, stdev=2035.20 00:31:14.186 clat (usec): min=4152, max=9704, avg=8674.63, stdev=196.70 00:31:14.186 lat (usec): min=4164, max=9730, avg=8680.90, stdev=196.36 00:31:14.186 clat percentiles (usec): 00:31:14.186 | 1.00th=[ 8586], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8717], 00:31:14.186 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8717], 00:31:14.186 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:31:14.186 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9634], 99.95th=[ 9765], 00:31:14.186 | 99.99th=[ 9765] 00:31:14.186 bw ( KiB/s): min=43776, max=44544, per=33.33%, avg=44160.00, stdev=404.77, samples=10 00:31:14.186 iops : min= 342, max= 348, avg=345.00, stdev= 3.16, samples=10 00:31:14.186 lat (msec) : 10=100.00% 00:31:14.186 cpu : usr=92.19%, sys=7.41%, ctx=8, majf=0, minf=0 00:31:14.186 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.186 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.186 filename0: (groupid=0, jobs=1): err= 0: pid=82031: Wed Nov 20 07:28:38 2024 00:31:14.186 read: IOPS=344, BW=43.1MiB/s (45.2MB/s)(216MiB/5001msec) 00:31:14.186 slat (nsec): min=3833, max=16422, avg=7393.91, stdev=1244.94 00:31:14.186 clat (usec): min=7514, max=11122, avg=8679.64, stdev=119.31 
00:31:14.186 lat (usec): min=7519, max=11133, avg=8687.04, stdev=119.30 00:31:14.186 clat percentiles (usec): 00:31:14.186 | 1.00th=[ 8586], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8717], 00:31:14.186 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8717], 00:31:14.186 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:31:14.186 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[11076], 99.95th=[11076], 00:31:14.186 | 99.99th=[11076] 00:31:14.186 bw ( KiB/s): min=43776, max=44544, per=33.30%, avg=44117.33, stdev=404.77, samples=9 00:31:14.186 iops : min= 342, max= 348, avg=344.67, stdev= 3.16, samples=9 00:31:14.186 lat (msec) : 10=99.83%, 20=0.17% 00:31:14.187 cpu : usr=92.62%, sys=6.94%, ctx=10, majf=0, minf=0 00:31:14.187 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.187 issued rwts: total=1725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.187 00:31:14.187 Run status group 0 (all jobs): 00:31:14.187 READ: bw=129MiB/s (136MB/s), 43.1MiB/s-43.2MiB/s (45.2MB/s-45.3MB/s), io=648MiB (679MB), run=5001-5005msec 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.187 
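[editor's note] For orientation before the trace continues: each create_subsystem pass below boils down to the same four SPDK RPCs as the previous test, now with --dif-type 2 instead of 3. A condensed sketch follows; every method name and argument is copied verbatim from this log, and the one assumption is the invocation style, since the suite drives these through its rpc_cmd wrapper rather than calling scripts/rpc.py directly as shown here.

# Assumed-equivalent direct form of the traced rpc_cmd calls (subsystem 0
# shown; subsystems 1 and 2 repeat the pattern with bdev_null1/cnode1 and
# bdev_null2/cnode2).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420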
07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 bdev_null0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 [2024-11-20 07:28:38.316788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 bdev_null1 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 bdev_null2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.187 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.445 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:14.446 { 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme$subsystem", 00:31:14.446 "trtype": "$TEST_TRANSPORT", 00:31:14.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "$NVMF_PORT", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.446 "hdgst": ${hdgst:-false}, 00:31:14.446 "ddgst": ${ddgst:-false} 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 } 00:31:14.446 EOF 00:31:14.446 )") 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:14.446 { 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme$subsystem", 00:31:14.446 "trtype": "$TEST_TRANSPORT", 00:31:14.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "$NVMF_PORT", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.446 "hdgst": ${hdgst:-false}, 00:31:14.446 "ddgst": ${ddgst:-false} 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 } 00:31:14.446 EOF 00:31:14.446 )") 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:14.446 { 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme$subsystem", 00:31:14.446 "trtype": "$TEST_TRANSPORT", 00:31:14.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "$NVMF_PORT", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.446 "hdgst": ${hdgst:-false}, 00:31:14.446 "ddgst": ${ddgst:-false} 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 } 00:31:14.446 EOF 00:31:14.446 )") 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme0", 00:31:14.446 "trtype": "tcp", 00:31:14.446 "traddr": "10.0.0.2", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "4420", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.446 "hdgst": false, 00:31:14.446 "ddgst": false 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 },{ 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme1", 00:31:14.446 "trtype": "tcp", 00:31:14.446 "traddr": "10.0.0.2", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "4420", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.446 "hdgst": false, 00:31:14.446 "ddgst": false 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 },{ 00:31:14.446 "params": { 00:31:14.446 "name": "Nvme2", 00:31:14.446 "trtype": "tcp", 00:31:14.446 "traddr": "10.0.0.2", 00:31:14.446 "adrfam": "ipv4", 00:31:14.446 "trsvcid": "4420", 00:31:14.446 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:14.446 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:14.446 "hdgst": false, 00:31:14.446 "ddgst": false 00:31:14.446 }, 00:31:14.446 "method": "bdev_nvme_attach_controller" 00:31:14.446 }' 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:14.446 07:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.446 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:14.446 ... 00:31:14.446 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:14.446 ... 00:31:14.446 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:14.446 ... 00:31:14.446 fio-3.35 00:31:14.446 Starting 24 threads 00:31:26.634 00:31:26.634 filename0: (groupid=0, jobs=1): err= 0: pid=82127: Wed Nov 20 07:28:49 2024 00:31:26.634 read: IOPS=246, BW=985KiB/s (1008kB/s)(9900KiB/10054msec) 00:31:26.634 slat (usec): min=5, max=8017, avg=16.71, stdev=236.37 00:31:26.634 clat (usec): min=1173, max=119943, avg=64835.23, stdev=23387.85 00:31:26.634 lat (usec): min=1180, max=119949, avg=64851.94, stdev=23385.46 00:31:26.634 clat percentiles (usec): 00:31:26.634 | 1.00th=[ 1287], 5.00th=[ 12387], 10.00th=[ 35914], 20.00th=[ 47973], 00:31:26.634 | 30.00th=[ 54264], 40.00th=[ 60031], 50.00th=[ 71828], 60.00th=[ 74974], 00:31:26.634 | 70.00th=[ 82314], 80.00th=[ 84411], 90.00th=[ 88605], 95.00th=[ 93848], 00:31:26.634 | 99.00th=[ 98042], 99.50th=[107480], 99.90th=[116917], 99.95th=[119014], 00:31:26.634 | 99.99th=[120062] 00:31:26.634 bw ( KiB/s): min= 784, max= 2688, per=4.33%, avg=983.60, stdev=419.81, samples=20 00:31:26.634 iops : min= 196, max= 672, avg=245.90, stdev=104.95, samples=20 00:31:26.634 lat (msec) : 2=2.51%, 4=0.08%, 10=1.13%, 20=3.23%, 50=18.46% 00:31:26.634 lat (msec) : 100=73.78%, 250=0.81% 00:31:26.634 cpu : usr=43.91%, sys=1.17%, ctx=1721, majf=0, minf=9 00:31:26.634 IO depths : 1=0.2%, 2=1.5%, 4=5.3%, 8=76.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:31:26.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.634 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.634 issued rwts: total=2475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.634 filename0: (groupid=0, jobs=1): err= 0: pid=82128: Wed Nov 20 07:28:49 2024 00:31:26.634 read: IOPS=235, BW=942KiB/s (965kB/s)(9460KiB/10040msec) 00:31:26.634 slat (usec): min=3, max=8012, avg=11.80, stdev=164.68 00:31:26.634 clat (msec): min=9, max=120, avg=67.80, stdev=20.66 00:31:26.634 lat (msec): min=9, max=120, avg=67.82, stdev=20.66 00:31:26.634 clat percentiles (msec): 00:31:26.634 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 48], 00:31:26.634 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:31:26.634 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 95], 00:31:26.634 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 121], 99.95th=[ 121], 00:31:26.634 | 99.99th=[ 121] 00:31:26.634 bw ( KiB/s): min= 760, max= 1880, per=4.13%, avg=939.35, stdev=284.15, samples=20 00:31:26.634 iops : min= 190, max= 470, avg=234.80, stdev=71.06, samples=20 00:31:26.634 lat (msec) : 10=0.59%, 20=1.01%, 50=23.17%, 100=74.76%, 250=0.47% 00:31:26.634 cpu : usr=32.24%, sys=1.06%, 
ctx=882, majf=0, minf=9 00:31:26.634 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=79.9%, 16=17.0%, 32=0.0%, >=64=0.0% 00:31:26.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.634 complete : 0=0.0%, 4=88.8%, 8=10.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.634 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.634 filename0: (groupid=0, jobs=1): err= 0: pid=82129: Wed Nov 20 07:28:49 2024 00:31:26.634 read: IOPS=228, BW=913KiB/s (935kB/s)(9148KiB/10018msec) 00:31:26.634 slat (usec): min=3, max=8025, avg=17.43, stdev=237.01 00:31:26.634 clat (msec): min=14, max=120, avg=69.97, stdev=18.76 00:31:26.634 lat (msec): min=14, max=120, avg=69.99, stdev=18.75 00:31:26.634 clat percentiles (msec): 00:31:26.634 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 51], 00:31:26.635 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 82], 00:31:26.635 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 96], 00:31:26.635 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:31:26.635 | 99.99th=[ 121] 00:31:26.635 bw ( KiB/s): min= 704, max= 1539, per=4.00%, avg=908.55, stdev=191.23, samples=20 00:31:26.635 iops : min= 176, max= 384, avg=227.10, stdev=47.68, samples=20 00:31:26.635 lat (msec) : 20=0.09%, 50=19.90%, 100=78.14%, 250=1.88% 00:31:26.635 cpu : usr=33.47%, sys=1.15%, ctx=968, majf=0, minf=9 00:31:26.635 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename0: (groupid=0, jobs=1): err= 0: pid=82130: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=235, BW=941KiB/s (964kB/s)(9440KiB/10031msec) 00:31:26.635 slat (usec): min=5, max=8029, avg=27.45, stdev=303.41 00:31:26.635 clat (msec): min=23, max=115, avg=67.80, stdev=17.62 00:31:26.635 lat (msec): min=23, max=115, avg=67.82, stdev=17.62 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 51], 00:31:26.635 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 75], 00:31:26.635 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 93], 00:31:26.635 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 109], 00:31:26.635 | 99.99th=[ 116] 00:31:26.635 bw ( KiB/s): min= 784, max= 1552, per=4.13%, avg=940.00, stdev=168.46, samples=20 00:31:26.635 iops : min= 196, max= 388, avg=235.00, stdev=42.12, samples=20 00:31:26.635 lat (msec) : 50=19.75%, 100=79.24%, 250=1.02% 00:31:26.635 cpu : usr=42.00%, sys=1.27%, ctx=1352, majf=0, minf=9 00:31:26.635 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename0: (groupid=0, jobs=1): err= 0: pid=82131: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=241, BW=965KiB/s (988kB/s)(9668KiB/10019msec) 00:31:26.635 slat (usec): min=3, max=8016, avg=14.86, stdev=182.41 00:31:26.635 clat (msec): min=15, 
max=122, avg=66.24, stdev=19.60 00:31:26.635 lat (msec): min=15, max=122, avg=66.25, stdev=19.59 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 48], 00:31:26.635 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 74], 00:31:26.635 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 94], 00:31:26.635 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 122], 00:31:26.635 | 99.99th=[ 123] 00:31:26.635 bw ( KiB/s): min= 760, max= 1712, per=4.23%, avg=960.40, stdev=242.78, samples=20 00:31:26.635 iops : min= 190, max= 428, avg=240.10, stdev=60.70, samples=20 00:31:26.635 lat (msec) : 20=0.08%, 50=26.98%, 100=72.28%, 250=0.66% 00:31:26.635 cpu : usr=36.43%, sys=0.99%, ctx=1020, majf=0, minf=9 00:31:26.635 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename0: (groupid=0, jobs=1): err= 0: pid=82132: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=238, BW=955KiB/s (977kB/s)(9564KiB/10019msec) 00:31:26.635 slat (usec): min=2, max=8018, avg=19.70, stdev=283.53 00:31:26.635 clat (msec): min=12, max=106, avg=66.91, stdev=17.66 00:31:26.635 lat (msec): min=12, max=106, avg=66.93, stdev=17.65 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 48], 00:31:26.635 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:31:26.635 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 86], 95.00th=[ 95], 00:31:26.635 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 100], 00:31:26.635 | 99.99th=[ 107] 00:31:26.635 bw ( KiB/s): min= 840, max= 1408, per=4.18%, avg=950.00, stdev=148.60, samples=20 00:31:26.635 iops : min= 210, max= 352, avg=237.50, stdev=37.15, samples=20 00:31:26.635 lat (msec) : 20=0.08%, 50=22.71%, 100=77.16%, 250=0.04% 00:31:26.635 cpu : usr=32.15%, sys=1.05%, ctx=875, majf=0, minf=9 00:31:26.635 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename0: (groupid=0, jobs=1): err= 0: pid=82133: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=241, BW=967KiB/s (991kB/s)(9712KiB/10039msec) 00:31:26.635 slat (usec): min=3, max=4020, avg=13.15, stdev=115.19 00:31:26.635 clat (msec): min=10, max=110, avg=66.02, stdev=19.10 00:31:26.635 lat (msec): min=10, max=110, avg=66.03, stdev=19.10 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 49], 00:31:26.635 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 75], 00:31:26.635 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 93], 00:31:26.635 | 99.00th=[ 97], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 110], 00:31:26.635 | 99.99th=[ 111] 00:31:26.635 bw ( KiB/s): min= 808, max= 1648, per=4.25%, avg=966.90, stdev=222.73, samples=20 00:31:26.635 iops : min= 202, max= 412, avg=241.70, stdev=55.70, samples=20 00:31:26.635 lat (msec) : 20=1.52%, 
50=22.03%, 100=75.91%, 250=0.54% 00:31:26.635 cpu : usr=41.75%, sys=1.42%, ctx=1502, majf=0, minf=9 00:31:26.635 IO depths : 1=0.2%, 2=0.8%, 4=2.6%, 8=80.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename0: (groupid=0, jobs=1): err= 0: pid=82134: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=234, BW=937KiB/s (959kB/s)(9396KiB/10029msec) 00:31:26.635 slat (usec): min=4, max=3040, avg=15.90, stdev=81.51 00:31:26.635 clat (msec): min=12, max=118, avg=68.21, stdev=18.80 00:31:26.635 lat (msec): min=12, max=118, avg=68.22, stdev=18.80 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:31:26.635 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 77], 00:31:26.635 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 96], 00:31:26.635 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 118], 00:31:26.635 | 99.99th=[ 118] 00:31:26.635 bw ( KiB/s): min= 672, max= 1536, per=4.11%, avg=933.20, stdev=191.18, samples=20 00:31:26.635 iops : min= 168, max= 384, avg=233.30, stdev=47.79, samples=20 00:31:26.635 lat (msec) : 20=0.09%, 50=19.63%, 100=77.61%, 250=2.68% 00:31:26.635 cpu : usr=45.48%, sys=1.21%, ctx=2043, majf=0, minf=9 00:31:26.635 IO depths : 1=0.2%, 2=2.3%, 4=9.1%, 8=73.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:31:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.635 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.635 filename1: (groupid=0, jobs=1): err= 0: pid=82135: Wed Nov 20 07:28:49 2024 00:31:26.635 read: IOPS=237, BW=949KiB/s (972kB/s)(9528KiB/10040msec) 00:31:26.635 slat (usec): min=3, max=8013, avg=13.77, stdev=164.14 00:31:26.635 clat (msec): min=9, max=113, avg=67.29, stdev=20.41 00:31:26.635 lat (msec): min=9, max=113, avg=67.31, stdev=20.41 00:31:26.635 clat percentiles (msec): 00:31:26.635 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 44], 20.00th=[ 50], 00:31:26.635 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 80], 00:31:26.636 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 95], 00:31:26.636 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 112], 00:31:26.636 | 99.99th=[ 114] 00:31:26.636 bw ( KiB/s): min= 731, max= 2048, per=4.17%, avg=948.55, stdev=283.36, samples=20 00:31:26.636 iops : min= 182, max= 512, avg=237.10, stdev=70.87, samples=20 00:31:26.636 lat (msec) : 10=0.59%, 20=2.85%, 50=18.47%, 100=77.50%, 250=0.59% 00:31:26.636 cpu : usr=37.88%, sys=0.94%, ctx=1181, majf=0, minf=0 00:31:26.636 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=76.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82136: Wed Nov 20 07:28:49 2024 00:31:26.636 read: IOPS=221, BW=885KiB/s (906kB/s)(8880KiB/10039msec) 
00:31:26.636 slat (nsec): min=2798, max=46872, avg=10367.12, stdev=6791.85 00:31:26.636 clat (msec): min=8, max=119, avg=72.22, stdev=18.67 00:31:26.636 lat (msec): min=8, max=119, avg=72.23, stdev=18.67 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 61], 00:31:26.636 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:31:26.636 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 96], 00:31:26.636 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 121], 00:31:26.636 | 99.99th=[ 121] 00:31:26.636 bw ( KiB/s): min= 736, max= 1763, per=3.89%, avg=883.90, stdev=234.16, samples=20 00:31:26.636 iops : min= 184, max= 440, avg=220.90, stdev=58.40, samples=20 00:31:26.636 lat (msec) : 10=0.63%, 20=1.35%, 50=11.76%, 100=85.45%, 250=0.81% 00:31:26.636 cpu : usr=32.01%, sys=1.05%, ctx=881, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=1.4%, 4=5.0%, 8=76.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=89.8%, 8=9.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82137: Wed Nov 20 07:28:49 2024 00:31:26.636 read: IOPS=241, BW=967KiB/s (990kB/s)(9676KiB/10006msec) 00:31:26.636 slat (usec): min=4, max=8015, avg=27.06, stdev=264.23 00:31:26.636 clat (msec): min=10, max=111, avg=66.05, stdev=17.71 00:31:26.636 lat (msec): min=10, max=111, avg=66.07, stdev=17.71 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 22], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:31:26.636 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 72], 00:31:26.636 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 94], 00:31:26.636 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 111], 99.95th=[ 111], 00:31:26.636 | 99.99th=[ 111] 00:31:26.636 bw ( KiB/s): min= 784, max= 1651, per=4.23%, avg=961.20, stdev=176.67, samples=20 00:31:26.636 iops : min= 196, max= 412, avg=240.25, stdev=44.01, samples=20 00:31:26.636 lat (msec) : 20=0.95%, 50=22.61%, 100=75.78%, 250=0.66% 00:31:26.636 cpu : usr=46.55%, sys=1.23%, ctx=1335, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82138: Wed Nov 20 07:28:49 2024 00:31:26.636 read: IOPS=237, BW=949KiB/s (972kB/s)(9500KiB/10007msec) 00:31:26.636 slat (nsec): min=2971, max=38083, avg=9497.73, stdev=5422.32 00:31:26.636 clat (msec): min=23, max=108, avg=67.36, stdev=17.31 00:31:26.636 lat (msec): min=23, max=108, avg=67.37, stdev=17.31 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 48], 00:31:26.636 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:31:26.636 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 85], 95.00th=[ 96], 00:31:26.636 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:31:26.636 | 99.99th=[ 109] 00:31:26.636 bw ( KiB/s): min= 840, max= 1392, per=4.16%, avg=944.65, stdev=148.84, samples=20 
00:31:26.636 iops : min= 210, max= 348, avg=236.15, stdev=37.19, samples=20 00:31:26.636 lat (msec) : 50=24.51%, 100=75.24%, 250=0.25% 00:31:26.636 cpu : usr=33.67%, sys=1.14%, ctx=897, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82139: Wed Nov 20 07:28:49 2024 00:31:26.636 read: IOPS=236, BW=945KiB/s (968kB/s)(9464KiB/10015msec) 00:31:26.636 slat (usec): min=3, max=15024, avg=23.78, stdev=359.54 00:31:26.636 clat (msec): min=13, max=117, avg=67.59, stdev=18.00 00:31:26.636 lat (msec): min=13, max=117, avg=67.62, stdev=18.00 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 49], 00:31:26.636 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:31:26.636 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 95], 00:31:26.636 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:31:26.636 | 99.99th=[ 118] 00:31:26.636 bw ( KiB/s): min= 792, max= 1536, per=4.14%, avg=940.00, stdev=169.26, samples=20 00:31:26.636 iops : min= 198, max= 384, avg=235.00, stdev=42.31, samples=20 00:31:26.636 lat (msec) : 20=0.08%, 50=23.16%, 100=75.99%, 250=0.76% 00:31:26.636 cpu : usr=36.62%, sys=1.12%, ctx=1043, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=77.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82140: Wed Nov 20 07:28:49 2024 00:31:26.636 read: IOPS=231, BW=927KiB/s (949kB/s)(9288KiB/10021msec) 00:31:26.636 slat (usec): min=3, max=8014, avg=21.27, stdev=245.26 00:31:26.636 clat (msec): min=13, max=120, avg=68.91, stdev=20.06 00:31:26.636 lat (msec): min=13, max=120, avg=68.93, stdev=20.06 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:31:26.636 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 79], 00:31:26.636 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 94], 95.00th=[ 97], 00:31:26.636 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 122], 00:31:26.636 | 99.99th=[ 122] 00:31:26.636 bw ( KiB/s): min= 640, max= 1648, per=4.06%, avg=922.40, stdev=226.36, samples=20 00:31:26.636 iops : min= 160, max= 412, avg=230.60, stdev=56.59, samples=20 00:31:26.636 lat (msec) : 20=0.09%, 50=21.71%, 100=73.77%, 250=4.44% 00:31:26.636 cpu : usr=43.04%, sys=1.16%, ctx=1442, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=2.6%, 4=10.5%, 8=72.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=90.0%, 8=7.7%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.636 filename1: (groupid=0, jobs=1): err= 0: pid=82141: Wed Nov 
20 07:28:49 2024 00:31:26.636 read: IOPS=223, BW=894KiB/s (915kB/s)(8960KiB/10025msec) 00:31:26.636 slat (usec): min=3, max=8022, avg=24.64, stdev=338.30 00:31:26.636 clat (msec): min=15, max=119, avg=71.50, stdev=18.87 00:31:26.636 lat (msec): min=15, max=120, avg=71.52, stdev=18.87 00:31:26.636 clat percentiles (msec): 00:31:26.636 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 48], 20.00th=[ 57], 00:31:26.636 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:31:26.636 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 91], 95.00th=[ 96], 00:31:26.636 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:31:26.636 | 99.99th=[ 121] 00:31:26.636 bw ( KiB/s): min= 656, max= 1664, per=3.91%, avg=889.60, stdev=204.04, samples=20 00:31:26.636 iops : min= 164, max= 416, avg=222.40, stdev=51.01, samples=20 00:31:26.636 lat (msec) : 20=0.80%, 50=13.71%, 100=83.62%, 250=1.88% 00:31:26.636 cpu : usr=36.33%, sys=1.26%, ctx=1065, majf=0, minf=9 00:31:26.636 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=73.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:26.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 complete : 0=0.0%, 4=90.1%, 8=8.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.636 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.637 filename1: (groupid=0, jobs=1): err= 0: pid=82142: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=251, BW=1004KiB/s (1029kB/s)(9.85MiB/10039msec) 00:31:26.637 slat (usec): min=4, max=4024, avg=13.89, stdev=113.10 00:31:26.637 clat (msec): min=8, max=117, avg=63.59, stdev=21.41 00:31:26.637 lat (msec): min=8, max=117, avg=63.60, stdev=21.41 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 48], 00:31:26.637 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 73], 00:31:26.637 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 91], 00:31:26.637 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 112], 99.95th=[ 112], 00:31:26.637 | 99.99th=[ 118] 00:31:26.637 bw ( KiB/s): min= 784, max= 2208, per=4.42%, avg=1004.10, stdev=338.16, samples=20 00:31:26.637 iops : min= 196, max= 552, avg=251.00, stdev=84.55, samples=20 00:31:26.637 lat (msec) : 10=0.67%, 20=4.13%, 50=22.93%, 100=71.84%, 250=0.44% 00:31:26.637 cpu : usr=43.95%, sys=1.20%, ctx=1320, majf=0, minf=9 00:31:26.637 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:31:26.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.637 filename2: (groupid=0, jobs=1): err= 0: pid=82143: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=235, BW=941KiB/s (964kB/s)(9448KiB/10037msec) 00:31:26.637 slat (usec): min=3, max=4023, avg=11.24, stdev=82.84 00:31:26.637 clat (msec): min=11, max=117, avg=67.84, stdev=18.58 00:31:26.637 lat (msec): min=11, max=117, avg=67.85, stdev=18.58 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:31:26.637 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 75], 00:31:26.637 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 94], 00:31:26.637 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 108], 00:31:26.637 | 99.99th=[ 118] 00:31:26.637 bw 
( KiB/s): min= 776, max= 1666, per=4.14%, avg=940.50, stdev=200.51, samples=20 00:31:26.637 iops : min= 194, max= 416, avg=235.10, stdev=50.03, samples=20 00:31:26.637 lat (msec) : 20=1.52%, 50=20.24%, 100=77.69%, 250=0.55% 00:31:26.637 cpu : usr=36.74%, sys=0.96%, ctx=1013, majf=0, minf=9 00:31:26.637 IO depths : 1=0.2%, 2=1.6%, 4=5.7%, 8=76.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:26.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.637 filename2: (groupid=0, jobs=1): err= 0: pid=82144: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=233, BW=934KiB/s (957kB/s)(9376KiB/10034msec) 00:31:26.637 slat (usec): min=4, max=4257, avg=13.87, stdev=88.11 00:31:26.637 clat (msec): min=14, max=119, avg=68.34, stdev=18.60 00:31:26.637 lat (msec): min=14, max=120, avg=68.36, stdev=18.60 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:31:26.637 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 78], 00:31:26.637 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 96], 00:31:26.637 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 121], 00:31:26.637 | 99.99th=[ 121] 00:31:26.637 bw ( KiB/s): min= 776, max= 1664, per=4.11%, avg=933.60, stdev=205.59, samples=20 00:31:26.637 iops : min= 194, max= 416, avg=233.40, stdev=51.40, samples=20 00:31:26.637 lat (msec) : 20=0.17%, 50=21.25%, 100=77.86%, 250=0.73% 00:31:26.637 cpu : usr=40.92%, sys=1.14%, ctx=1134, majf=0, minf=9 00:31:26.637 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:26.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 issued rwts: total=2344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.637 filename2: (groupid=0, jobs=1): err= 0: pid=82145: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=241, BW=968KiB/s (991kB/s)(9704KiB/10025msec) 00:31:26.637 slat (usec): min=3, max=4016, avg=14.96, stdev=81.77 00:31:26.637 clat (msec): min=12, max=120, avg=66.01, stdev=17.46 00:31:26.637 lat (msec): min=12, max=120, avg=66.03, stdev=17.46 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 49], 00:31:26.637 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 72], 00:31:26.637 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 94], 00:31:26.637 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 105], 99.95th=[ 112], 00:31:26.637 | 99.99th=[ 121] 00:31:26.637 bw ( KiB/s): min= 808, max= 1552, per=4.24%, avg=964.00, stdev=175.36, samples=20 00:31:26.637 iops : min= 202, max= 388, avg=241.00, stdev=43.84, samples=20 00:31:26.637 lat (msec) : 20=0.16%, 50=23.70%, 100=75.85%, 250=0.29% 00:31:26.637 cpu : usr=43.07%, sys=0.95%, ctx=1219, majf=0, minf=9 00:31:26.637 IO depths : 1=0.2%, 2=1.2%, 4=4.1%, 8=79.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:26.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:26.637 filename2: (groupid=0, jobs=1): err= 0: pid=82146: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=247, BW=990KiB/s (1013kB/s)(9908KiB/10013msec) 00:31:26.637 slat (usec): min=3, max=4030, avg=17.84, stdev=161.35 00:31:26.637 clat (msec): min=21, max=107, avg=64.58, stdev=18.02 00:31:26.637 lat (msec): min=21, max=107, avg=64.60, stdev=18.02 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:31:26.637 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 72], 00:31:26.637 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 92], 00:31:26.637 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 104], 00:31:26.637 | 99.99th=[ 108] 00:31:26.637 bw ( KiB/s): min= 864, max= 1644, per=4.34%, avg=986.40, stdev=186.55, samples=20 00:31:26.637 iops : min= 216, max= 411, avg=246.60, stdev=46.64, samples=20 00:31:26.637 lat (msec) : 50=28.99%, 100=70.81%, 250=0.20% 00:31:26.637 cpu : usr=40.41%, sys=1.13%, ctx=1134, majf=0, minf=9 00:31:26.637 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:31:26.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.637 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.637 filename2: (groupid=0, jobs=1): err= 0: pid=82147: Wed Nov 20 07:28:49 2024 00:31:26.637 read: IOPS=231, BW=926KiB/s (948kB/s)(9272KiB/10011msec) 00:31:26.637 slat (usec): min=2, max=8041, avg=35.72, stdev=439.79 00:31:26.637 clat (msec): min=13, max=120, avg=68.87, stdev=19.02 00:31:26.637 lat (msec): min=13, max=120, avg=68.90, stdev=19.04 00:31:26.637 clat percentiles (msec): 00:31:26.637 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 50], 00:31:26.637 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 77], 00:31:26.637 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 96], 00:31:26.637 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:31:26.637 | 99.99th=[ 121] 00:31:26.638 bw ( KiB/s): min= 656, max= 1520, per=4.06%, avg=923.20, stdev=188.57, samples=20 00:31:26.638 iops : min= 164, max= 380, avg=230.80, stdev=47.14, samples=20 00:31:26.638 lat (msec) : 20=0.17%, 50=22.09%, 100=74.98%, 250=2.76% 00:31:26.638 cpu : usr=32.19%, sys=1.01%, ctx=874, majf=0, minf=9 00:31:26.638 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:26.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 complete : 0=0.0%, 4=90.0%, 8=7.5%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.638 filename2: (groupid=0, jobs=1): err= 0: pid=82148: Wed Nov 20 07:28:49 2024 00:31:26.638 read: IOPS=235, BW=941KiB/s (964kB/s)(9444KiB/10035msec) 00:31:26.638 slat (usec): min=3, max=8028, avg=17.59, stdev=233.37 00:31:26.638 clat (msec): min=11, max=119, avg=67.79, stdev=18.75 00:31:26.638 lat (msec): min=11, max=119, avg=67.81, stdev=18.75 00:31:26.638 clat percentiles (msec): 00:31:26.638 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 50], 00:31:26.638 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 75], 00:31:26.638 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 95], 00:31:26.638 | 99.00th=[ 108], 99.50th=[ 
108], 99.90th=[ 109], 99.95th=[ 120], 00:31:26.638 | 99.99th=[ 120] 00:31:26.638 bw ( KiB/s): min= 680, max= 1673, per=4.14%, avg=940.05, stdev=206.26, samples=20 00:31:26.638 iops : min= 170, max= 418, avg=235.00, stdev=51.52, samples=20 00:31:26.638 lat (msec) : 20=1.44%, 50=19.36%, 100=77.17%, 250=2.03% 00:31:26.638 cpu : usr=40.45%, sys=0.98%, ctx=1166, majf=0, minf=9 00:31:26.638 IO depths : 1=0.2%, 2=1.6%, 4=5.7%, 8=77.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:26.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.638 filename2: (groupid=0, jobs=1): err= 0: pid=82149: Wed Nov 20 07:28:49 2024 00:31:26.638 read: IOPS=241, BW=966KiB/s (990kB/s)(9680KiB/10017msec) 00:31:26.638 slat (usec): min=3, max=8031, avg=28.33, stdev=345.11 00:31:26.638 clat (msec): min=15, max=108, avg=66.07, stdev=17.47 00:31:26.638 lat (msec): min=15, max=116, avg=66.10, stdev=17.47 00:31:26.638 clat percentiles (msec): 00:31:26.638 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 49], 00:31:26.638 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 72], 00:31:26.638 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 86], 95.00th=[ 95], 00:31:26.638 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 108], 00:31:26.638 | 99.99th=[ 109] 00:31:26.638 bw ( KiB/s): min= 840, max= 1536, per=4.23%, avg=961.60, stdev=164.95, samples=20 00:31:26.638 iops : min= 210, max= 384, avg=240.40, stdev=41.24, samples=20 00:31:26.638 lat (msec) : 20=0.08%, 50=24.26%, 100=75.58%, 250=0.08% 00:31:26.638 cpu : usr=35.69%, sys=0.99%, ctx=981, majf=0, minf=9 00:31:26.638 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:26.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.638 filename2: (groupid=0, jobs=1): err= 0: pid=82150: Wed Nov 20 07:28:49 2024 00:31:26.638 read: IOPS=247, BW=989KiB/s (1013kB/s)(9896KiB/10005msec) 00:31:26.638 slat (usec): min=2, max=8022, avg=22.35, stdev=285.78 00:31:26.638 clat (msec): min=10, max=108, avg=64.59, stdev=17.56 00:31:26.638 lat (msec): min=10, max=108, avg=64.61, stdev=17.56 00:31:26.638 clat percentiles (msec): 00:31:26.638 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 48], 00:31:26.638 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 72], 00:31:26.638 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 86], 95.00th=[ 93], 00:31:26.638 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 106], 00:31:26.638 | 99.99th=[ 109] 00:31:26.638 bw ( KiB/s): min= 864, max= 1432, per=4.34%, avg=986.10, stdev=163.92, samples=20 00:31:26.638 iops : min= 216, max= 358, avg=246.50, stdev=40.95, samples=20 00:31:26.638 lat (msec) : 20=0.24%, 50=27.97%, 100=71.71%, 250=0.08% 00:31:26.638 cpu : usr=37.96%, sys=1.21%, ctx=1115, majf=0, minf=9 00:31:26.638 IO depths : 1=0.2%, 2=1.0%, 4=3.4%, 8=80.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:26.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.638 issued rwts: total=2474,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:26.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:26.638 00:31:26.638 Run status group 0 (all jobs): 00:31:26.638 READ: bw=22.2MiB/s (23.3MB/s), 885KiB/s-1004KiB/s (906kB/s-1029kB/s), io=223MiB (234MB), run=10005-10054msec 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
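The teardown that follows is symmetric with setup: for each of the three subsystems, the script deletes the NVMe-oF subsystem first, then the null bdev behind it. As standalone calls this reduces to the sketch below (that rpc_cmd resolves to scripts/rpc.py on the default RPC socket is an assumption):

    # Tear down subsystems 0..2 from the run above; order matters — the
    # subsystem must release its namespace before the backing bdev is deleted
    for i in 0 1 2; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "bdev_null$i"
    done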
00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.638 bdev_null0 00:31:26.638 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 [2024-11-20 07:28:49.429023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 bdev_null1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:26.639 { 00:31:26.639 "params": { 
00:31:26.639 "name": "Nvme$subsystem", 00:31:26.639 "trtype": "$TEST_TRANSPORT", 00:31:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.639 "adrfam": "ipv4", 00:31:26.639 "trsvcid": "$NVMF_PORT", 00:31:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.639 "hdgst": ${hdgst:-false}, 00:31:26.639 "ddgst": ${ddgst:-false} 00:31:26.639 }, 00:31:26.639 "method": "bdev_nvme_attach_controller" 00:31:26.639 } 00:31:26.639 EOF 00:31:26.639 )") 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:26.639 { 00:31:26.639 "params": { 00:31:26.639 "name": "Nvme$subsystem", 00:31:26.639 "trtype": "$TEST_TRANSPORT", 00:31:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.639 "adrfam": "ipv4", 00:31:26.639 "trsvcid": "$NVMF_PORT", 00:31:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.639 "hdgst": ${hdgst:-false}, 00:31:26.639 "ddgst": ${ddgst:-false} 00:31:26.639 }, 00:31:26.639 "method": "bdev_nvme_attach_controller" 00:31:26.639 } 00:31:26.639 EOF 00:31:26.639 )") 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:26.639 "params": { 00:31:26.639 "name": "Nvme0", 00:31:26.639 "trtype": "tcp", 00:31:26.639 "traddr": "10.0.0.2", 00:31:26.639 "adrfam": "ipv4", 00:31:26.639 "trsvcid": "4420", 00:31:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.639 "hdgst": false, 00:31:26.639 "ddgst": false 00:31:26.639 }, 00:31:26.639 "method": "bdev_nvme_attach_controller" 00:31:26.639 },{ 00:31:26.639 "params": { 00:31:26.639 "name": "Nvme1", 00:31:26.639 "trtype": "tcp", 00:31:26.639 "traddr": "10.0.0.2", 00:31:26.639 "adrfam": "ipv4", 00:31:26.639 "trsvcid": "4420", 00:31:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:26.639 "hdgst": false, 00:31:26.639 "ddgst": false 00:31:26.639 }, 00:31:26.639 "method": "bdev_nvme_attach_controller" 00:31:26.639 }' 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:26.639 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:26.640 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:26.640 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:26.640 07:28:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.640 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:26.640 ... 00:31:26.640 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:26.640 ... 
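Net effect of the wrapper just assembled: a stock fio binary runs with the SPDK bdev engine preloaded, taking the bdev JSON config on fd 62 and the job file on fd 61. Flattened into one command (paths exactly as logged):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61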
00:31:26.640 fio-3.35 00:31:26.640 Starting 4 threads 00:31:31.902 00:31:31.902 filename0: (groupid=0, jobs=1): err= 0: pid=82301: Wed Nov 20 07:28:55 2024 00:31:31.902 read: IOPS=2901, BW=22.7MiB/s (23.8MB/s)(113MiB/5002msec) 00:31:31.902 slat (usec): min=5, max=423, avg= 8.34, stdev= 5.50 00:31:31.902 clat (usec): min=699, max=8652, avg=2734.01, stdev=740.76 00:31:31.902 lat (usec): min=706, max=8672, avg=2742.35, stdev=740.77 00:31:31.902 clat percentiles (usec): 00:31:31.902 | 1.00th=[ 1172], 5.00th=[ 1483], 10.00th=[ 1631], 20.00th=[ 1991], 00:31:31.902 | 30.00th=[ 2278], 40.00th=[ 2671], 50.00th=[ 2900], 60.00th=[ 2933], 00:31:31.902 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3589], 95.00th=[ 3720], 00:31:31.902 | 99.00th=[ 4080], 99.50th=[ 4490], 99.90th=[ 4948], 99.95th=[ 8586], 00:31:31.902 | 99.99th=[ 8586] 00:31:31.902 bw ( KiB/s): min=22272, max=24336, per=26.12%, avg=22986.67, stdev=654.53, samples=9 00:31:31.902 iops : min= 2784, max= 3042, avg=2873.33, stdev=81.82, samples=9 00:31:31.902 lat (usec) : 750=0.06%, 1000=0.39% 00:31:31.902 lat (msec) : 2=19.72%, 4=78.54%, 10=1.30% 00:31:31.902 cpu : usr=92.96%, sys=6.16%, ctx=41, majf=0, minf=9 00:31:31.902 IO depths : 1=0.1%, 2=6.3%, 4=61.5%, 8=32.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 issued rwts: total=14515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.902 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:31.902 filename0: (groupid=0, jobs=1): err= 0: pid=82302: Wed Nov 20 07:28:55 2024 00:31:31.902 read: IOPS=2714, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:31:31.902 slat (nsec): min=4798, max=39005, avg=9220.83, stdev=4441.90 00:31:31.902 clat (usec): min=529, max=5400, avg=2919.67, stdev=802.36 00:31:31.902 lat (usec): min=535, max=5408, avg=2928.89, stdev=802.73 00:31:31.902 clat percentiles (usec): 00:31:31.902 | 1.00th=[ 996], 5.00th=[ 1516], 10.00th=[ 1647], 20.00th=[ 2212], 00:31:31.902 | 30.00th=[ 2671], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 3195], 00:31:31.902 | 70.00th=[ 3359], 80.00th=[ 3654], 90.00th=[ 3752], 95.00th=[ 3982], 00:31:31.902 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4817], 99.95th=[ 5080], 00:31:31.902 | 99.99th=[ 5407] 00:31:31.902 bw ( KiB/s): min=17152, max=24736, per=24.20%, avg=21299.56, stdev=2633.52, samples=9 00:31:31.902 iops : min= 2144, max= 3092, avg=2662.44, stdev=329.19, samples=9 00:31:31.902 lat (usec) : 750=0.12%, 1000=0.95% 00:31:31.902 lat (msec) : 2=16.04%, 4=78.13%, 10=4.77% 00:31:31.902 cpu : usr=94.18%, sys=5.18%, ctx=8, majf=0, minf=10 00:31:31.902 IO depths : 1=0.1%, 2=10.0%, 4=58.8%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 issued rwts: total=13574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.902 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:31.902 filename1: (groupid=0, jobs=1): err= 0: pid=82303: Wed Nov 20 07:28:55 2024 00:31:31.902 read: IOPS=2696, BW=21.1MiB/s (22.1MB/s)(105MiB/5001msec) 00:31:31.902 slat (nsec): min=3693, max=36850, avg=8419.64, stdev=4703.76 00:31:31.902 clat (usec): min=930, max=5767, avg=2940.48, stdev=624.03 00:31:31.902 lat (usec): min=937, max=5779, avg=2948.90, stdev=624.45 00:31:31.902 clat percentiles (usec): 00:31:31.902 | 1.00th=[ 1237], 5.00th=[ 
1680], 10.00th=[ 1991], 20.00th=[ 2442], 00:31:31.902 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 3228], 00:31:31.902 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3621], 95.00th=[ 3720], 00:31:31.902 | 99.00th=[ 3982], 99.50th=[ 4047], 99.90th=[ 4359], 99.95th=[ 4621], 00:31:31.902 | 99.99th=[ 5145] 00:31:31.902 bw ( KiB/s): min=20128, max=23648, per=24.89%, avg=21900.44, stdev=1336.26, samples=9 00:31:31.902 iops : min= 2516, max= 2956, avg=2737.56, stdev=167.03, samples=9 00:31:31.902 lat (usec) : 1000=0.27% 00:31:31.902 lat (msec) : 2=9.77%, 4=89.15%, 10=0.81% 00:31:31.902 cpu : usr=93.24%, sys=6.26%, ctx=9, majf=0, minf=9 00:31:31.902 IO depths : 1=0.1%, 2=12.5%, 4=58.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 issued rwts: total=13487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.902 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:31.902 filename1: (groupid=0, jobs=1): err= 0: pid=82304: Wed Nov 20 07:28:55 2024 00:31:31.902 read: IOPS=2688, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 00:31:31.902 slat (nsec): min=3861, max=38818, avg=10033.09, stdev=5381.07 00:31:31.902 clat (usec): min=719, max=5147, avg=2942.43, stdev=622.84 00:31:31.902 lat (usec): min=728, max=5158, avg=2952.46, stdev=622.67 00:31:31.902 clat percentiles (usec): 00:31:31.902 | 1.00th=[ 1254], 5.00th=[ 1696], 10.00th=[ 2008], 20.00th=[ 2409], 00:31:31.902 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 3228], 00:31:31.902 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3621], 95.00th=[ 3720], 00:31:31.902 | 99.00th=[ 3982], 99.50th=[ 4047], 99.90th=[ 4621], 99.95th=[ 4686], 00:31:31.902 | 99.99th=[ 4883] 00:31:31.902 bw ( KiB/s): min=20032, max=23648, per=24.81%, avg=21832.89, stdev=1314.51, samples=9 00:31:31.902 iops : min= 2504, max= 2956, avg=2729.11, stdev=164.31, samples=9 00:31:31.902 lat (usec) : 750=0.01%, 1000=0.24% 00:31:31.902 lat (msec) : 2=9.67%, 4=89.26%, 10=0.83% 00:31:31.902 cpu : usr=94.36%, sys=5.02%, ctx=47, majf=0, minf=9 00:31:31.902 IO depths : 1=0.1%, 2=12.6%, 4=58.4%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.902 issued rwts: total=13449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.902 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:31.902 00:31:31.902 Run status group 0 (all jobs): 00:31:31.902 READ: bw=85.9MiB/s (90.1MB/s), 21.0MiB/s-22.7MiB/s (22.0MB/s-23.8MB/s), io=430MiB (451MB), run=5001-5002msec 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.902 ************************************ 00:31:31.902 END TEST fio_dif_rand_params 00:31:31.902 ************************************ 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.902 00:31:31.902 real 0m22.888s 00:31:31.902 user 2m7.277s 00:31:31.902 sys 0m5.374s 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.902 07:28:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:31.902 07:28:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:31.902 07:28:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.902 07:28:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:31.902 ************************************ 00:31:31.902 START TEST fio_dif_digest 00:31:31.902 ************************************ 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:31.902 
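fio_dif_digest switches the null bdev to DIF type 3 (per-block protection information carried in the 16-byte metadata) and turns on both NVMe/TCP digests via hdgst/ddgst. The create_subsystem call that follows expands to four RPCs; consolidated as a sketch (again assuming rpc_cmd resolves to scripts/rpc.py):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420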
07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:31.902 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.903 bdev_null0 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.903 [2024-11-20 07:28:55.422070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # 
for subsystem in "${@:-1}" 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:31.903 { 00:31:31.903 "params": { 00:31:31.903 "name": "Nvme$subsystem", 00:31:31.903 "trtype": "$TEST_TRANSPORT", 00:31:31.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.903 "adrfam": "ipv4", 00:31:31.903 "trsvcid": "$NVMF_PORT", 00:31:31.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.903 "hdgst": ${hdgst:-false}, 00:31:31.903 "ddgst": ${ddgst:-false} 00:31:31.903 }, 00:31:31.903 "method": "bdev_nvme_attach_controller" 00:31:31.903 } 00:31:31.903 EOF 00:31:31.903 )") 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:31.903 "params": { 00:31:31.903 "name": "Nvme0", 00:31:31.903 "trtype": "tcp", 00:31:31.903 "traddr": "10.0.0.2", 00:31:31.903 "adrfam": "ipv4", 00:31:31.903 "trsvcid": "4420", 00:31:31.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:31.903 "hdgst": true, 00:31:31.903 "ddgst": true 00:31:31.903 }, 00:31:31.903 "method": "bdev_nvme_attach_controller" 00:31:31.903 }' 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:31.903 07:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.903 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:31.903 ... 
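The params block printed above is, verbatim, the JSON-RPC body the fio plugin uses to attach the digest-enabled controller. Issued by hand it would look roughly like this — the nc transport, framing, and socket path are assumptions; only the method name and params are from the log:

    # JSON-RPC 2.0 over SPDK's unix-domain RPC socket (sketch)
    nc -U /var/tmp/spdk.sock <<-'EOF'
    {"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
     "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true, "ddgst": true}}
    EOF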
00:31:31.903 fio-3.35 00:31:31.903 Starting 3 threads 00:31:41.891 00:31:41.891 filename0: (groupid=0, jobs=1): err= 0: pid=82411: Wed Nov 20 07:29:06 2024 00:31:41.891 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(375MiB/10007msec) 00:31:41.891 slat (nsec): min=4501, max=20982, avg=7337.29, stdev=1276.18 00:31:41.891 clat (usec): min=4750, max=10203, avg=9990.03, stdev=204.55 00:31:41.891 lat (usec): min=4755, max=10210, avg=9997.37, stdev=204.49 00:31:41.891 clat percentiles (usec): 00:31:41.891 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:31:41.891 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10028], 00:31:41.891 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:31:41.891 | 99.00th=[10159], 99.50th=[10159], 99.90th=[10159], 99.95th=[10159], 00:31:41.891 | 99.99th=[10159] 00:31:41.891 bw ( KiB/s): min=37632, max=38400, per=33.34%, avg=38351.53, stdev=177.73, samples=19 00:31:41.891 iops : min= 294, max= 300, avg=299.58, stdev= 1.43, samples=19 00:31:41.891 lat (msec) : 10=66.17%, 20=33.83% 00:31:41.891 cpu : usr=92.86%, sys=6.71%, ctx=17, majf=0, minf=9 00:31:41.891 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 issued rwts: total=3000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.891 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.891 filename0: (groupid=0, jobs=1): err= 0: pid=82412: Wed Nov 20 07:29:06 2024 00:31:41.891 read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(375MiB/10004msec) 00:31:41.891 slat (nsec): min=4127, max=19109, avg=6923.10, stdev=1132.83 00:31:41.891 clat (usec): min=7202, max=10505, avg=9997.37, stdev=100.31 00:31:41.891 lat (usec): min=7209, max=10518, avg=10004.29, stdev=100.34 00:31:41.891 clat percentiles (usec): 00:31:41.891 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:31:41.891 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10028], 00:31:41.891 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:31:41.891 | 99.00th=[10159], 99.50th=[10159], 99.90th=[10552], 99.95th=[10552], 00:31:41.891 | 99.99th=[10552] 00:31:41.891 bw ( KiB/s): min=37632, max=38400, per=33.31%, avg=38319.16, stdev=242.15, samples=19 00:31:41.891 iops : min= 294, max= 300, avg=299.37, stdev= 1.89, samples=19 00:31:41.891 lat (msec) : 10=65.43%, 20=34.57% 00:31:41.891 cpu : usr=91.80%, sys=7.79%, ctx=8, majf=0, minf=0 00:31:41.891 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 issued rwts: total=2997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.891 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.891 filename0: (groupid=0, jobs=1): err= 0: pid=82413: Wed Nov 20 07:29:06 2024 00:31:41.891 read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(375MiB/10004msec) 00:31:41.891 slat (nsec): min=3828, max=25950, avg=7240.32, stdev=1262.41 00:31:41.891 clat (usec): min=5661, max=11765, avg=9996.45, stdev=154.44 00:31:41.891 lat (usec): min=5667, max=11776, avg=10003.69, stdev=154.44 00:31:41.891 clat percentiles (usec): 00:31:41.891 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:31:41.891 | 30.00th=[10028], 40.00th=[10028], 
50.00th=[10028], 60.00th=[10028], 00:31:41.891 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:31:41.891 | 99.00th=[10159], 99.50th=[10159], 99.90th=[11731], 99.95th=[11731], 00:31:41.891 | 99.99th=[11731] 00:31:41.891 bw ( KiB/s): min=37632, max=38400, per=33.31%, avg=38323.11, stdev=230.67, samples=19 00:31:41.891 iops : min= 294, max= 300, avg=299.37, stdev= 1.89, samples=19 00:31:41.891 lat (msec) : 10=66.80%, 20=33.20% 00:31:41.891 cpu : usr=92.63%, sys=6.95%, ctx=22, majf=0, minf=0 00:31:41.891 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.891 issued rwts: total=2997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.891 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.891 00:31:41.891 Run status group 0 (all jobs): 00:31:41.891 READ: bw=112MiB/s (118MB/s), 37.4MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=1124MiB (1179MB), run=10004-10007msec 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.149 ************************************ 00:31:42.149 END TEST fio_dif_digest 00:31:42.149 ************************************ 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.149 00:31:42.149 real 0m10.815s 00:31:42.149 user 0m28.244s 00:31:42.149 sys 0m2.325s 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.149 07:29:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.149 07:29:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:42.149 07:29:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:42.149 rmmod nvme_tcp 00:31:42.149 rmmod nvme_fabrics 00:31:42.149 rmmod nvme_keyring 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:42.149 07:29:06 
nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 81642 ']' 00:31:42.149 07:29:06 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 81642 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 81642 ']' 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 81642 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81642 00:31:42.149 killing process with pid 81642 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81642' 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@973 -- # kill 81642 00:31:42.149 07:29:06 nvmf_dif -- common/autotest_common.sh@978 -- # wait 81642 00:31:42.407 07:29:06 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:31:42.407 07:29:06 nvmf_dif -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:42.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:42.664 Waiting for block devices as requested 00:31:42.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:42.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:42.921 07:29:06 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:42.921 07:29:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:31:42.921 07:29:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:31:42.921 07:29:06 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@261 -- # [[ 
-e /sys/class/net/initiator1/address ]] 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:31:42.921 07:29:07 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:31:42.921 07:29:07 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:42.921 07:29:07 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:31:42.921 07:29:07 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:31:42.921 00:31:42.921 real 0m58.189s 00:31:42.921 user 3m51.358s 00:31:42.921 sys 0m14.245s 00:31:42.921 07:29:07 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.921 07:29:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.921 ************************************ 00:31:42.921 END TEST nvmf_dif 00:31:42.921 ************************************ 00:31:42.921 07:29:07 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:42.921 07:29:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:42.921 07:29:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.921 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:31:42.921 ************************************ 00:31:42.921 START TEST nvmf_abort_qd_sizes 00:31:42.921 ************************************ 00:31:42.921 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:43.181 * Looking for test storage... 
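The iptr helper seen in the nvmf_dif teardown above replays the firewall minus every rule the test tagged; as a standalone pipeline built from the three commands in the log:

    # Keep everything except rules carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore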
00:31:43.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.181 --rc genhtml_branch_coverage=1 00:31:43.181 --rc genhtml_function_coverage=1 00:31:43.181 --rc genhtml_legend=1 00:31:43.181 --rc geninfo_all_blocks=1 00:31:43.181 --rc geninfo_unexecuted_blocks=1 00:31:43.181 00:31:43.181 ' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:43.181 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.181 --rc genhtml_branch_coverage=1 00:31:43.181 --rc genhtml_function_coverage=1 00:31:43.181 --rc genhtml_legend=1 00:31:43.181 --rc geninfo_all_blocks=1 00:31:43.181 --rc geninfo_unexecuted_blocks=1 00:31:43.181 00:31:43.181 ' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.181 --rc genhtml_branch_coverage=1 00:31:43.181 --rc genhtml_function_coverage=1 00:31:43.181 --rc genhtml_legend=1 00:31:43.181 --rc geninfo_all_blocks=1 00:31:43.181 --rc geninfo_unexecuted_blocks=1 00:31:43.181 00:31:43.181 ' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.181 --rc genhtml_branch_coverage=1 00:31:43.181 --rc genhtml_function_coverage=1 00:31:43.181 --rc genhtml_legend=1 00:31:43.181 --rc geninfo_all_blocks=1 00:31:43.181 --rc geninfo_unexecuted_blocks=1 00:31:43.181 00:31:43.181 ' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.181 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:43.181 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.182 
07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@280 -- # nvmf_veth_init 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@223 -- # create_target_ns 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # create_main_bridge 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@105 -- # delete_main_bridge 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set 
target0 up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0 up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target0_br 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:31:43.182 10.0.0.1 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias 00:31:43.182 10.0.0.2 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator0 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:31:43.182 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target0_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:31:43.183 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator1 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target1 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1 up 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target1_br 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:43.442 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772163 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:31:43.443 10.0.0.3 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772164 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:31:43.443 10.0.0.4 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # 
set_up target1 NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target1_br 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 2 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
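Addressing in this loop is driven by a single integer pool: ip_pool starts at 0x0a000001 (167772161) and each pair consumes two consecutive values, which val_to_ip renders as dotted quads — 10.0.0.1/10.0.0.2 for pair 0, 10.0.0.3/10.0.0.4 for pair 1. The trace only shows the final printf with the octets already split, so the extraction below is a reconstruction under the usual bit-shift assumption:

  val_to_ip() {
      local val=$1
      # split the 32-bit value into four octets, high byte first
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
          $(( (val >> 8) & 255 )) $(( val & 255 ))
  }
  val_to_ip 167772161   # 10.0.0.1 (initiator0)
  val_to_ip 167772164   # 10.0.0.4 (target1)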
00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:43.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:43.443 00:31:43.443 --- 10.0.0.1 ping statistics --- 00:31:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.443 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:43.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:43.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:31:43.443 00:31:43.443 --- 10.0.0.2 ping statistics --- 00:31:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.443 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:43.443 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:31:43.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:43.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:31:43.444 00:31:43.444 --- 10.0.0.3 ping statistics --- 00:31:43.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.444 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:31:43.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:31:43.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:31:43.444 00:31:43.444 --- 10.0.0.4 ping statistics --- 00:31:43.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.444 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # return 0 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:31:43.444 07:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:44.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:44.009 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:44.009 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:44.268 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:31:44.269 
07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:44.269 ' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=83058 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 83058 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 83058 ']' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:44.269 07:29:08 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:44.269 [2024-11-20 07:29:08.327175] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:31:44.269 [2024-11-20 07:29:08.327240] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.269 [2024-11-20 07:29:08.463735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.526 [2024-11-20 07:29:08.495701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:44.526 [2024-11-20 07:29:08.495735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.526 [2024-11-20 07:29:08.495741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.526 [2024-11-20 07:29:08.495745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.526 [2024-11-20 07:29:08.495749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.526 [2024-11-20 07:29:08.496360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.526 [2024-11-20 07:29:08.496462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.527 [2024-11-20 07:29:08.496464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.527 [2024-11-20 07:29:08.496414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.527 [2024-11-20 07:29:08.525327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:45.094 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- 
scripts/common.sh@240 -- # hash lspci 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.095 07:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:45.095 
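The spdk_target_abort case that starts below drives everything just configured: it attaches the first userspace NVMe device (0000:00:10.0) to the target as bdev spdk_targetn1, exposes it over the veth network, then runs the abort example against it at queue depths 4, 24 and 64. The rpc_cmd calls visible in the trace condense to roughly this sequence — a sketch using the repo's scripts/rpc.py against the default RPC socket:

  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420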
************************************ 00:31:45.095 START TEST spdk_target_abort 00:31:45.095 ************************************ 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:45.095 spdk_targetn1 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:45.095 [2024-11-20 07:29:09.271985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.095 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:45.352 [2024-11-20 07:29:09.311382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:45.352 07:29:09 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:45.352 07:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.629 Initializing NVMe Controllers 00:31:48.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:48.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:48.629 Initialization complete. Launching workers. 
00:31:48.629 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15924, failed: 0 00:31:48.629 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1023, failed to submit 14901 00:31:48.629 success 873, unsuccessful 150, failed 0 00:31:48.629 07:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:48.629 07:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:51.910 Initializing NVMe Controllers 00:31:51.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:51.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:51.910 Initialization complete. Launching workers. 00:31:51.910 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:31:51.910 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1150, failed to submit 7826 00:31:51.910 success 379, unsuccessful 771, failed 0 00:31:51.910 07:29:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:51.910 07:29:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.189 Initializing NVMe Controllers 00:31:55.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:55.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:55.189 Initialization complete. Launching workers. 
00:31:55.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37612, failed: 0 00:31:55.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2226, failed to submit 35386 00:31:55.189 success 531, unsuccessful 1695, failed 0 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.189 07:29:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83058 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 83058 ']' 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 83058 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83058 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83058' 00:31:57.132 killing process with pid 83058 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 83058 00:31:57.132 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 83058 00:31:57.132 00:31:57.132 real 0m11.930s 00:31:57.132 user 0m47.838s 00:31:57.133 sys 0m1.867s 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.133 ************************************ 00:31:57.133 END TEST spdk_target_abort 00:31:57.133 ************************************ 00:31:57.133 07:29:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:57.133 07:29:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.133 07:29:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.133 07:29:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:57.133 ************************************ 00:31:57.133 START TEST kernel_target_abort 00:31:57.133 
************************************ 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:57.133 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:57.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:57.391 Waiting for block devices as requested 00:31:57.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:57.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:57.650 No valid GPT data, bailing 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:31:57.650 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:57.650 No valid GPT data, bailing 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:57.651 No valid GPT data, bailing 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:57.651 No valid GPT data, bailing 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:57.651 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:31:57.909 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 --hostid=6878406f-1821-4d15-bee4-f9cf994eb227 -a 10.0.0.1 -t tcp -s 4420 00:31:57.910 00:31:57.910 Discovery Log Number of Records 2, Generation counter 2 00:31:57.910 =====Discovery Log Entry 0====== 00:31:57.910 trtype: tcp 00:31:57.910 adrfam: ipv4 00:31:57.910 subtype: current discovery subsystem 00:31:57.910 treq: not specified, sq flow control disable supported 00:31:57.910 portid: 1 00:31:57.910 trsvcid: 4420 00:31:57.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:57.910 traddr: 10.0.0.1 00:31:57.910 eflags: none 00:31:57.910 sectype: none 00:31:57.910 =====Discovery Log Entry 1====== 00:31:57.910 trtype: tcp 00:31:57.910 adrfam: ipv4 00:31:57.910 subtype: nvme subsystem 00:31:57.910 treq: not specified, sq flow control disable supported 00:31:57.910 portid: 1 00:31:57.910 trsvcid: 4420 00:31:57.910 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:57.910 traddr: 10.0.0.1 00:31:57.910 eflags: none 00:31:57.910 sectype: none 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:57.910 07:29:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.910 07:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:01.195 Initializing NVMe Controllers 00:32:01.195 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:01.195 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:01.195 Initialization complete. Launching workers. 00:32:01.195 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58675, failed: 0 00:32:01.195 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58675, failed to submit 0 00:32:01.195 success 0, unsuccessful 58675, failed 0 00:32:01.195 07:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.195 07:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.487 Initializing NVMe Controllers 00:32:04.487 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:04.487 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:04.487 Initialization complete. Launching workers. 
00:32:04.487 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77018, failed: 0 00:32:04.487 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34791, failed to submit 42227 00:32:04.487 success 0, unsuccessful 34791, failed 0 00:32:04.487 07:29:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.487 07:29:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.768 Initializing NVMe Controllers 00:32:07.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.768 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:07.768 Initialization complete. Launching workers. 00:32:07.768 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 111903, failed: 0 00:32:07.768 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27978, failed to submit 83925 00:32:07.768 success 0, unsuccessful 27978, failed 0 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:32:07.768 07:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:07.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:11.127 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:11.127 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:11.127 00:32:11.127 real 0m14.130s 00:32:11.127 user 0m7.080s 00:32:11.127 sys 0m5.031s 00:32:11.127 07:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.127 07:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.127 ************************************ 00:32:11.127 END TEST kernel_target_abort 00:32:11.127 ************************************ 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:11.386 
07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:11.386 rmmod nvme_tcp 00:32:11.386 rmmod nvme_fabrics 00:32:11.386 rmmod nvme_keyring 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 83058 ']' 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 83058 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 83058 ']' 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 83058 00:32:11.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83058) - No such process 00:32:11.386 Process with pid 83058 is not found 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 83058 is not found' 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:32:11.386 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:11.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:11.643 Waiting for block devices as requested 00:32:11.643 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:11.643 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:32:11.643 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:32:11.901 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- 
# delete_dev initiator0 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:11.902 00:32:11.902 real 0m28.902s 00:32:11.902 user 0m55.901s 00:32:11.902 sys 0m8.001s 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.902 07:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:11.902 ************************************ 00:32:11.902 END TEST nvmf_abort_qd_sizes 00:32:11.902 ************************************ 00:32:11.902 07:29:36 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:32:11.902 07:29:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.902 07:29:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.902 07:29:36 -- common/autotest_common.sh@10 -- # set +x 00:32:11.902 ************************************ 00:32:11.902 START TEST keyring_file 00:32:11.902 ************************************ 00:32:11.902 07:29:36 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:32:11.902 * Looking for test storage... 
00:32:11.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:32:11.902 07:29:36 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.161 07:29:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.161 --rc genhtml_branch_coverage=1 00:32:12.161 --rc genhtml_function_coverage=1 00:32:12.161 --rc genhtml_legend=1 00:32:12.161 --rc geninfo_all_blocks=1 00:32:12.161 --rc geninfo_unexecuted_blocks=1 00:32:12.161 00:32:12.161 ' 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.161 --rc genhtml_branch_coverage=1 00:32:12.161 --rc genhtml_function_coverage=1 00:32:12.161 --rc genhtml_legend=1 00:32:12.161 --rc geninfo_all_blocks=1 00:32:12.161 --rc 
geninfo_unexecuted_blocks=1 00:32:12.161 00:32:12.161 ' 00:32:12.161 07:29:36 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.161 --rc genhtml_branch_coverage=1 00:32:12.161 --rc genhtml_function_coverage=1 00:32:12.161 --rc genhtml_legend=1 00:32:12.161 --rc geninfo_all_blocks=1 00:32:12.161 --rc geninfo_unexecuted_blocks=1 00:32:12.162 00:32:12.162 ' 00:32:12.162 07:29:36 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:12.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.162 --rc genhtml_branch_coverage=1 00:32:12.162 --rc genhtml_function_coverage=1 00:32:12.162 --rc genhtml_legend=1 00:32:12.162 --rc geninfo_all_blocks=1 00:32:12.162 --rc geninfo_unexecuted_blocks=1 00:32:12.162 00:32:12.162 ' 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:12.162 07:29:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.162 07:29:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.162 07:29:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.162 07:29:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.162 07:29:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.162 07:29:36 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.162 07:29:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.162 07:29:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:12.162 07:29:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:32:12.162 07:29:36 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:12.162 07:29:36 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:12.162 07:29:36 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@50 -- # : 0 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:12.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:12.162 
07:29:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nOvKF2OmU2 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@507 -- # python - 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nOvKF2OmU2 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nOvKF2OmU2 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nOvKF2OmU2 00:32:12.162 07:29:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UclbD1pVUj 00:32:12.162 07:29:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:12.162 07:29:36 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:12.163 07:29:36 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:32:12.163 07:29:36 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:32:12.163 07:29:36 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:32:12.163 07:29:36 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:32:12.163 07:29:36 keyring_file -- nvmf/common.sh@507 -- # python - 00:32:12.163 07:29:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UclbD1pVUj 00:32:12.163 07:29:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UclbD1pVUj 00:32:12.163 07:29:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UclbD1pVUj 00:32:12.163 07:29:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=83975 00:32:12.163 07:29:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 83975 00:32:12.163 07:29:36 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 83975 ']' 00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.163 07:29:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:12.163 [2024-11-20 07:29:36.321658] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:12.163 [2024-11-20 07:29:36.321719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83975 ] 00:32:12.421 [2024-11-20 07:29:36.460947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.421 [2024-11-20 07:29:36.496334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.421 [2024-11-20 07:29:36.538443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:12.987 07:29:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.987 07:29:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:12.987 07:29:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:12.987 07:29:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.987 07:29:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.246 [2024-11-20 07:29:37.190155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.246 null0 00:32:13.246 [2024-11-20 07:29:37.222130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:13.246 [2024-11-20 07:29:37.222265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.246 07:29:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.246 [2024-11-20 07:29:37.250121] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:13.246 request: 00:32:13.246 { 00:32:13.246 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.246 "secure_channel": false, 00:32:13.246 "listen_address": { 00:32:13.246 "trtype": "tcp", 00:32:13.246 "traddr": "127.0.0.1", 00:32:13.246 "trsvcid": "4420" 00:32:13.246 }, 00:32:13.246 "method": "nvmf_subsystem_add_listener", 
00:32:13.246 "req_id": 1 00:32:13.246 } 00:32:13.246 Got JSON-RPC error response 00:32:13.246 response: 00:32:13.246 { 00:32:13.246 "code": -32602, 00:32:13.246 "message": "Invalid parameters" 00:32:13.246 } 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:13.246 07:29:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=83992 00:32:13.246 07:29:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 83992 /var/tmp/bperf.sock 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 83992 ']' 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.246 07:29:37 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:13.246 07:29:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.246 [2024-11-20 07:29:37.290209] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:32:13.246 [2024-11-20 07:29:37.290281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83992 ] 00:32:13.246 [2024-11-20 07:29:37.423610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.505 [2024-11-20 07:29:37.458214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.505 [2024-11-20 07:29:37.487463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:14.070 07:29:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.070 07:29:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:14.070 07:29:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:14.070 07:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:14.328 07:29:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UclbD1pVUj 00:32:14.328 07:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UclbD1pVUj 00:32:14.587 07:29:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:14.587 07:29:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.587 07:29:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.nOvKF2OmU2 == \/\t\m\p\/\t\m\p\.\n\O\v\K\F\2\O\m\U\2 ]] 00:32:14.587 07:29:38 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:14.587 07:29:38 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.587 07:29:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.846 07:29:38 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.UclbD1pVUj == \/\t\m\p\/\t\m\p\.\U\c\l\b\D\1\p\V\U\j ]] 00:32:14.846 07:29:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:14.846 07:29:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.846 07:29:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.846 07:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.846 07:29:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.846 07:29:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.214 07:29:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:15.214 07:29:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:15.214 07:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:15.214 07:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.214 07:29:39 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.214 07:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:15.214 07:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.472 07:29:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:15.472 07:29:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.472 07:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.472 [2024-11-20 07:29:39.649329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:15.731 nvme0n1 00:32:15.731 07:29:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:15.731 07:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:15.731 07:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.731 07:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.731 07:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.731 07:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.990 07:29:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:15.990 07:29:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:15.990 07:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:15.990 07:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.990 07:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.990 07:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:15.990 07:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.990 07:29:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:15.990 07:29:40 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.248 Running I/O for 1 seconds... 
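The xtrace output above keeps expanding the same three helpers from keyring/common.sh. A minimal reconstruction of what they reduce to, assembled from the expansions visible in this trace (not copied from the file itself):

# bperf_cmd routes every keyring RPC through bdevperf's private socket
bperf_cmd() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}
# get_key extracts the JSON object for one named key from keyring_get_keys
get_key() {
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
# get_refcnt reads that key's refcnt: 1 while only the keyring holds it,
# 2 once an attached controller also uses it as a TLS PSK
get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

Checks like file.sh@60's (( $(get_refcnt key0) == 2 )) then assert exactly that bump after bdev_nvme_attach_controller --psk key0.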
00:32:17.182 21259.00 IOPS, 83.04 MiB/s 00:32:17.182 Latency(us) 00:32:17.182 [2024-11-20T07:29:41.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.182 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:17.182 nvme0n1 : 1.00 21307.76 83.23 0.00 0.00 5997.16 3705.30 10939.47 00:32:17.182 [2024-11-20T07:29:41.385Z] =================================================================================================================== 00:32:17.182 [2024-11-20T07:29:41.385Z] Total : 21307.76 83.23 0.00 0.00 5997.16 3705.30 10939.47 00:32:17.182 { 00:32:17.182 "results": [ 00:32:17.182 { 00:32:17.182 "job": "nvme0n1", 00:32:17.182 "core_mask": "0x2", 00:32:17.182 "workload": "randrw", 00:32:17.182 "percentage": 50, 00:32:17.182 "status": "finished", 00:32:17.182 "queue_depth": 128, 00:32:17.182 "io_size": 4096, 00:32:17.182 "runtime": 1.003719, 00:32:17.182 "iops": 21307.75645374851, 00:32:17.182 "mibps": 83.23342364745511, 00:32:17.182 "io_failed": 0, 00:32:17.182 "io_timeout": 0, 00:32:17.182 "avg_latency_us": 5997.158272566728, 00:32:17.182 "min_latency_us": 3705.3046153846153, 00:32:17.182 "max_latency_us": 10939.47076923077 00:32:17.182 } 00:32:17.182 ], 00:32:17.182 "core_count": 1 00:32:17.182 } 00:32:17.182 07:29:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:17.182 07:29:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:17.440 07:29:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:17.440 07:29:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.440 07:29:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.440 07:29:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.440 07:29:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.440 07:29:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.715 07:29:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:17.715 07:29:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.715 07:29:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:17.715 07:29:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:17.715 07:29:41 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.715 07:29:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.715 07:29:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.973 [2024-11-20 07:29:42.073997] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:17.973 [2024-11-20 07:29:42.074584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2503770 (107): Transport endpoint is not connected 00:32:17.973 [2024-11-20 07:29:42.075579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2503770 (9): Bad file descriptor 00:32:17.973 [2024-11-20 07:29:42.076578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:17.973 [2024-11-20 07:29:42.076593] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:17.973 [2024-11-20 07:29:42.076598] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:17.973 [2024-11-20 07:29:42.076603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
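The negative test above runs the attach through the harness's NOT wrapper, which passes only when the wrapped command fails. A stripped-down, hypothetical stand-in for that pattern (the real helper in autotest_common.sh also does the valid_exec_arg/es bookkeeping visible in the trace):

# Minimal NOT sketch: invert the wrapped command's exit status so an
# expected failure does not abort the script under `set -e`.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed, as the test expects
}
# usage, as in file.sh@70: attaching with key1 must be rejected
# NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp ... --psk key1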
00:32:17.973 request: 00:32:17.973 { 00:32:17.973 "name": "nvme0", 00:32:17.973 "trtype": "tcp", 00:32:17.973 "traddr": "127.0.0.1", 00:32:17.973 "adrfam": "ipv4", 00:32:17.973 "trsvcid": "4420", 00:32:17.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.973 "prchk_reftag": false, 00:32:17.973 "prchk_guard": false, 00:32:17.973 "hdgst": false, 00:32:17.973 "ddgst": false, 00:32:17.973 "psk": "key1", 00:32:17.973 "allow_unrecognized_csi": false, 00:32:17.973 "method": "bdev_nvme_attach_controller", 00:32:17.973 "req_id": 1 00:32:17.973 } 00:32:17.973 Got JSON-RPC error response 00:32:17.973 response: 00:32:17.973 { 00:32:17.973 "code": -5, 00:32:17.973 "message": "Input/output error" 00:32:17.973 } 00:32:17.973 07:29:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:17.973 07:29:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:17.973 07:29:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:17.973 07:29:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:17.973 07:29:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:17.973 07:29:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.973 07:29:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.973 07:29:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.973 07:29:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.973 07:29:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.231 07:29:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:18.231 07:29:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:18.231 07:29:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:18.231 07:29:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.231 07:29:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.231 07:29:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:18.231 07:29:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.489 07:29:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:18.489 07:29:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:18.489 07:29:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:18.489 07:29:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:18.489 07:29:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:18.747 07:29:42 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:18.747 07:29:42 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:18.747 07:29:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.004 07:29:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:19.005 07:29:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.nOvKF2OmU2 00:32:19.005 07:29:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.005 07:29:43 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.005 07:29:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.005 07:29:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.262 [2024-11-20 07:29:43.245133] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nOvKF2OmU2': 0100660 00:32:19.262 [2024-11-20 07:29:43.245158] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:19.262 request: 00:32:19.262 { 00:32:19.262 "name": "key0", 00:32:19.262 "path": "/tmp/tmp.nOvKF2OmU2", 00:32:19.262 "method": "keyring_file_add_key", 00:32:19.262 "req_id": 1 00:32:19.262 } 00:32:19.262 Got JSON-RPC error response 00:32:19.262 response: 00:32:19.262 { 00:32:19.262 "code": -1, 00:32:19.262 "message": "Operation not permitted" 00:32:19.262 } 00:32:19.262 07:29:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:19.262 07:29:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:19.262 07:29:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:19.262 07:29:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:19.262 07:29:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.nOvKF2OmU2 00:32:19.262 07:29:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.262 07:29:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nOvKF2OmU2 00:32:19.262 07:29:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.nOvKF2OmU2 00:32:19.520 07:29:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.520 07:29:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:19.520 07:29:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.520 07:29:43 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.520 07:29:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.520 07:29:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.779 [2024-11-20 07:29:43.853248] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nOvKF2OmU2': No such file or directory 00:32:19.779 [2024-11-20 07:29:43.853274] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:19.779 [2024-11-20 07:29:43.853287] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:19.779 [2024-11-20 07:29:43.853291] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:19.779 [2024-11-20 07:29:43.853296] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:19.779 [2024-11-20 07:29:43.853300] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:19.779 request: 00:32:19.779 { 00:32:19.779 "name": "nvme0", 00:32:19.779 "trtype": "tcp", 00:32:19.779 "traddr": "127.0.0.1", 00:32:19.779 "adrfam": "ipv4", 00:32:19.779 "trsvcid": "4420", 00:32:19.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.779 "prchk_reftag": false, 00:32:19.779 "prchk_guard": false, 00:32:19.779 "hdgst": false, 00:32:19.779 "ddgst": false, 00:32:19.779 "psk": "key0", 00:32:19.779 "allow_unrecognized_csi": false, 00:32:19.779 "method": "bdev_nvme_attach_controller", 00:32:19.779 "req_id": 1 00:32:19.779 } 00:32:19.779 Got JSON-RPC error response 00:32:19.779 response: 00:32:19.779 { 00:32:19.779 "code": -19, 00:32:19.779 "message": "No such device" 00:32:19.779 } 00:32:19.779 07:29:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:19.779 07:29:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:19.779 07:29:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:19.779 07:29:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:19.779 07:29:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:19.779 07:29:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:20.037 07:29:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:20.037 
07:29:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fkTM8kbUML 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:32:20.037 07:29:44 keyring_file -- nvmf/common.sh@507 -- # python - 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fkTM8kbUML 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fkTM8kbUML 00:32:20.037 07:29:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.fkTM8kbUML 00:32:20.037 07:29:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fkTM8kbUML 00:32:20.037 07:29:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fkTM8kbUML 00:32:20.295 07:29:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.296 07:29:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.554 nvme0n1 00:32:20.554 07:29:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:20.554 07:29:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.554 07:29:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.554 07:29:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.554 07:29:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.554 07:29:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.812 07:29:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:20.812 07:29:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:20.812 07:29:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:20.812 07:29:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:20.812 07:29:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:20.812 07:29:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.812 07:29:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.812 07:29:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.098 07:29:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:21.098 07:29:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:21.098 07:29:45 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:32:21.098 07:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.098 07:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.098 07:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.098 07:29:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.355 07:29:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:21.355 07:29:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:21.355 07:29:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:21.613 07:29:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:21.613 07:29:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.613 07:29:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:21.613 07:29:45 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:21.613 07:29:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fkTM8kbUML 00:32:21.613 07:29:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fkTM8kbUML 00:32:21.871 07:29:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UclbD1pVUj 00:32:21.871 07:29:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UclbD1pVUj 00:32:22.128 07:29:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.128 07:29:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.386 nvme0n1 00:32:22.387 07:29:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:22.387 07:29:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:22.646 07:29:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:22.646 "subsystems": [ 00:32:22.646 { 00:32:22.646 "subsystem": "keyring", 00:32:22.646 "config": [ 00:32:22.646 { 00:32:22.646 "method": "keyring_file_add_key", 00:32:22.646 "params": { 00:32:22.646 "name": "key0", 00:32:22.646 "path": "/tmp/tmp.fkTM8kbUML" 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "keyring_file_add_key", 00:32:22.646 "params": { 00:32:22.646 "name": "key1", 00:32:22.646 "path": "/tmp/tmp.UclbD1pVUj" 00:32:22.646 } 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": "iobuf", 00:32:22.646 "config": [ 00:32:22.646 { 00:32:22.646 "method": "iobuf_set_options", 00:32:22.646 "params": { 00:32:22.646 "small_pool_count": 8192, 00:32:22.646 "large_pool_count": 1024, 00:32:22.646 "small_bufsize": 8192, 00:32:22.646 "large_bufsize": 135168, 00:32:22.646 "enable_numa": false 00:32:22.646 } 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": 
"sock", 00:32:22.646 "config": [ 00:32:22.646 { 00:32:22.646 "method": "sock_set_default_impl", 00:32:22.646 "params": { 00:32:22.646 "impl_name": "uring" 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "sock_impl_set_options", 00:32:22.646 "params": { 00:32:22.646 "impl_name": "ssl", 00:32:22.646 "recv_buf_size": 4096, 00:32:22.646 "send_buf_size": 4096, 00:32:22.646 "enable_recv_pipe": true, 00:32:22.646 "enable_quickack": false, 00:32:22.646 "enable_placement_id": 0, 00:32:22.646 "enable_zerocopy_send_server": true, 00:32:22.646 "enable_zerocopy_send_client": false, 00:32:22.646 "zerocopy_threshold": 0, 00:32:22.646 "tls_version": 0, 00:32:22.646 "enable_ktls": false 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "sock_impl_set_options", 00:32:22.646 "params": { 00:32:22.646 "impl_name": "posix", 00:32:22.646 "recv_buf_size": 2097152, 00:32:22.646 "send_buf_size": 2097152, 00:32:22.646 "enable_recv_pipe": true, 00:32:22.646 "enable_quickack": false, 00:32:22.646 "enable_placement_id": 0, 00:32:22.646 "enable_zerocopy_send_server": true, 00:32:22.646 "enable_zerocopy_send_client": false, 00:32:22.646 "zerocopy_threshold": 0, 00:32:22.646 "tls_version": 0, 00:32:22.646 "enable_ktls": false 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "sock_impl_set_options", 00:32:22.646 "params": { 00:32:22.646 "impl_name": "uring", 00:32:22.646 "recv_buf_size": 2097152, 00:32:22.646 "send_buf_size": 2097152, 00:32:22.646 "enable_recv_pipe": true, 00:32:22.646 "enable_quickack": false, 00:32:22.646 "enable_placement_id": 0, 00:32:22.646 "enable_zerocopy_send_server": false, 00:32:22.646 "enable_zerocopy_send_client": false, 00:32:22.646 "zerocopy_threshold": 0, 00:32:22.646 "tls_version": 0, 00:32:22.646 "enable_ktls": false 00:32:22.646 } 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": "vmd", 00:32:22.646 "config": [] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": "accel", 00:32:22.646 "config": [ 00:32:22.646 { 00:32:22.646 "method": "accel_set_options", 00:32:22.646 "params": { 00:32:22.646 "small_cache_size": 128, 00:32:22.646 "large_cache_size": 16, 00:32:22.646 "task_count": 2048, 00:32:22.646 "sequence_count": 2048, 00:32:22.646 "buf_count": 2048 00:32:22.646 } 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": "bdev", 00:32:22.646 "config": [ 00:32:22.646 { 00:32:22.646 "method": "bdev_set_options", 00:32:22.646 "params": { 00:32:22.646 "bdev_io_pool_size": 65535, 00:32:22.646 "bdev_io_cache_size": 256, 00:32:22.646 "bdev_auto_examine": true, 00:32:22.646 "iobuf_small_cache_size": 128, 00:32:22.646 "iobuf_large_cache_size": 16 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_raid_set_options", 00:32:22.646 "params": { 00:32:22.646 "process_window_size_kb": 1024, 00:32:22.646 "process_max_bandwidth_mb_sec": 0 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_iscsi_set_options", 00:32:22.646 "params": { 00:32:22.646 "timeout_sec": 30 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_nvme_set_options", 00:32:22.646 "params": { 00:32:22.646 "action_on_timeout": "none", 00:32:22.646 "timeout_us": 0, 00:32:22.646 "timeout_admin_us": 0, 00:32:22.646 "keep_alive_timeout_ms": 10000, 00:32:22.646 "arbitration_burst": 0, 00:32:22.646 "low_priority_weight": 0, 00:32:22.646 "medium_priority_weight": 0, 00:32:22.646 "high_priority_weight": 0, 00:32:22.646 "nvme_adminq_poll_period_us": 
10000, 00:32:22.646 "nvme_ioq_poll_period_us": 0, 00:32:22.646 "io_queue_requests": 512, 00:32:22.646 "delay_cmd_submit": true, 00:32:22.646 "transport_retry_count": 4, 00:32:22.646 "bdev_retry_count": 3, 00:32:22.646 "transport_ack_timeout": 0, 00:32:22.646 "ctrlr_loss_timeout_sec": 0, 00:32:22.646 "reconnect_delay_sec": 0, 00:32:22.646 "fast_io_fail_timeout_sec": 0, 00:32:22.646 "disable_auto_failback": false, 00:32:22.646 "generate_uuids": false, 00:32:22.646 "transport_tos": 0, 00:32:22.646 "nvme_error_stat": false, 00:32:22.646 "rdma_srq_size": 0, 00:32:22.646 "io_path_stat": false, 00:32:22.646 "allow_accel_sequence": false, 00:32:22.646 "rdma_max_cq_size": 0, 00:32:22.646 "rdma_cm_event_timeout_ms": 0, 00:32:22.646 "dhchap_digests": [ 00:32:22.646 "sha256", 00:32:22.646 "sha384", 00:32:22.646 "sha512" 00:32:22.646 ], 00:32:22.646 "dhchap_dhgroups": [ 00:32:22.646 "null", 00:32:22.646 "ffdhe2048", 00:32:22.646 "ffdhe3072", 00:32:22.646 "ffdhe4096", 00:32:22.646 "ffdhe6144", 00:32:22.646 "ffdhe8192" 00:32:22.646 ] 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_nvme_attach_controller", 00:32:22.646 "params": { 00:32:22.646 "name": "nvme0", 00:32:22.646 "trtype": "TCP", 00:32:22.646 "adrfam": "IPv4", 00:32:22.646 "traddr": "127.0.0.1", 00:32:22.646 "trsvcid": "4420", 00:32:22.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.646 "prchk_reftag": false, 00:32:22.646 "prchk_guard": false, 00:32:22.646 "ctrlr_loss_timeout_sec": 0, 00:32:22.646 "reconnect_delay_sec": 0, 00:32:22.646 "fast_io_fail_timeout_sec": 0, 00:32:22.646 "psk": "key0", 00:32:22.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.646 "hdgst": false, 00:32:22.646 "ddgst": false, 00:32:22.646 "multipath": "multipath" 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_nvme_set_hotplug", 00:32:22.646 "params": { 00:32:22.646 "period_us": 100000, 00:32:22.646 "enable": false 00:32:22.646 } 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "method": "bdev_wait_for_examine" 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }, 00:32:22.646 { 00:32:22.646 "subsystem": "nbd", 00:32:22.646 "config": [] 00:32:22.646 } 00:32:22.646 ] 00:32:22.646 }' 00:32:22.646 07:29:46 keyring_file -- keyring/file.sh@115 -- # killprocess 83992 00:32:22.646 07:29:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 83992 ']' 00:32:22.646 07:29:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 83992 00:32:22.646 07:29:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:22.646 07:29:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.646 07:29:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83992 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83992' 00:32:22.647 killing process with pid 83992 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@973 -- # kill 83992 00:32:22.647 Received shutdown signal, test time was about 1.000000 seconds 00:32:22.647 00:32:22.647 Latency(us) 00:32:22.647 [2024-11-20T07:29:46.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.647 [2024-11-20T07:29:46.850Z] =================================================================================================================== 00:32:22.647 
[2024-11-20T07:29:46.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@978 -- # wait 83992 00:32:22.647 07:29:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=84225 00:32:22.647 07:29:46 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:22.647 07:29:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84225 /var/tmp/bperf.sock 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84225 ']' 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.647 07:29:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:22.647 07:29:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:22.647 "subsystems": [ 00:32:22.647 { 00:32:22.647 "subsystem": "keyring", 00:32:22.647 "config": [ 00:32:22.647 { 00:32:22.647 "method": "keyring_file_add_key", 00:32:22.647 "params": { 00:32:22.647 "name": "key0", 00:32:22.647 "path": "/tmp/tmp.fkTM8kbUML" 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "keyring_file_add_key", 00:32:22.647 "params": { 00:32:22.647 "name": "key1", 00:32:22.647 "path": "/tmp/tmp.UclbD1pVUj" 00:32:22.647 } 00:32:22.647 } 00:32:22.647 ] 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "subsystem": "iobuf", 00:32:22.647 "config": [ 00:32:22.647 { 00:32:22.647 "method": "iobuf_set_options", 00:32:22.647 "params": { 00:32:22.647 "small_pool_count": 8192, 00:32:22.647 "large_pool_count": 1024, 00:32:22.647 "small_bufsize": 8192, 00:32:22.647 "large_bufsize": 135168, 00:32:22.647 "enable_numa": false 00:32:22.647 } 00:32:22.647 } 00:32:22.647 ] 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "subsystem": "sock", 00:32:22.647 "config": [ 00:32:22.647 { 00:32:22.647 "method": "sock_set_default_impl", 00:32:22.647 "params": { 00:32:22.647 "impl_name": "uring" 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "sock_impl_set_options", 00:32:22.647 "params": { 00:32:22.647 "impl_name": "ssl", 00:32:22.647 "recv_buf_size": 4096, 00:32:22.647 "send_buf_size": 4096, 00:32:22.647 "enable_recv_pipe": true, 00:32:22.647 "enable_quickack": false, 00:32:22.647 "enable_placement_id": 0, 00:32:22.647 "enable_zerocopy_send_server": true, 00:32:22.647 "enable_zerocopy_send_client": false, 00:32:22.647 "zerocopy_threshold": 0, 00:32:22.647 "tls_version": 0, 00:32:22.647 "enable_ktls": false 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "sock_impl_set_options", 00:32:22.647 "params": { 00:32:22.647 "impl_name": "posix", 00:32:22.647 "recv_buf_size": 2097152, 00:32:22.647 "send_buf_size": 2097152, 00:32:22.647 "enable_recv_pipe": true, 00:32:22.647 "enable_quickack": false, 00:32:22.647 "enable_placement_id": 0, 00:32:22.647 "enable_zerocopy_send_server": true, 00:32:22.647 "enable_zerocopy_send_client": false, 00:32:22.647 "zerocopy_threshold": 0, 00:32:22.647 "tls_version": 0, 00:32:22.647 "enable_ktls": false 
00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "sock_impl_set_options", 00:32:22.647 "params": { 00:32:22.647 "impl_name": "uring", 00:32:22.647 "recv_buf_size": 2097152, 00:32:22.647 "send_buf_size": 2097152, 00:32:22.647 "enable_recv_pipe": true, 00:32:22.647 "enable_quickack": false, 00:32:22.647 "enable_placement_id": 0, 00:32:22.647 "enable_zerocopy_send_server": false, 00:32:22.647 "enable_zerocopy_send_client": false, 00:32:22.647 "zerocopy_threshold": 0, 00:32:22.647 "tls_version": 0, 00:32:22.647 "enable_ktls": false 00:32:22.647 } 00:32:22.647 } 00:32:22.647 ] 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "subsystem": "vmd", 00:32:22.647 "config": [] 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "subsystem": "accel", 00:32:22.647 "config": [ 00:32:22.647 { 00:32:22.647 "method": "accel_set_options", 00:32:22.647 "params": { 00:32:22.647 "small_cache_size": 128, 00:32:22.647 "large_cache_size": 16, 00:32:22.647 "task_count": 2048, 00:32:22.647 "sequence_count": 2048, 00:32:22.647 "buf_count": 2048 00:32:22.647 } 00:32:22.647 } 00:32:22.647 ] 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "subsystem": "bdev", 00:32:22.647 "config": [ 00:32:22.647 { 00:32:22.647 "method": "bdev_set_options", 00:32:22.647 "params": { 00:32:22.647 "bdev_io_pool_size": 65535, 00:32:22.647 "bdev_io_cache_size": 256, 00:32:22.647 "bdev_auto_examine": true, 00:32:22.647 "iobuf_small_cache_size": 128, 00:32:22.647 "iobuf_large_cache_size": 16 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "bdev_raid_set_options", 00:32:22.647 "params": { 00:32:22.647 "process_window_size_kb": 1024, 00:32:22.647 "process_max_bandwidth_mb_sec": 0 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "bdev_iscsi_set_options", 00:32:22.647 "params": { 00:32:22.647 "timeout_sec": 30 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "bdev_nvme_set_options", 00:32:22.647 "params": { 00:32:22.647 "action_on_timeout": "none", 00:32:22.647 "timeout_us": 0, 00:32:22.647 "timeout_admin_us": 0, 00:32:22.647 "keep_alive_timeout_ms": 10000, 00:32:22.647 "arbitration_burst": 0, 00:32:22.647 "low_priority_weight": 0, 00:32:22.647 "medium_priority_weight": 0, 00:32:22.647 "high_priority_weight": 0, 00:32:22.647 "nvme_adminq_poll_period_us": 10000, 00:32:22.647 "nvme_ioq_poll_period_us": 0, 00:32:22.647 "io_queue_requests": 512, 00:32:22.647 "delay_cmd_submit": true, 00:32:22.647 "transport_retry_count": 4, 00:32:22.647 "bdev_retry_count": 3, 00:32:22.647 "transport_ack_timeout": 0, 00:32:22.647 "ctrlr_loss_timeout_sec": 0, 00:32:22.647 "reconnect_delay_sec": 0, 00:32:22.647 "fast_io_fail_timeout_sec": 0, 00:32:22.647 "disable_auto_failback": false, 00:32:22.647 "generate_uuids": false, 00:32:22.647 "transport_tos": 0, 00:32:22.647 "nvme_error_stat": false, 00:32:22.647 "rdma_srq_size": 0, 00:32:22.647 "io_path_stat": false, 00:32:22.647 "allow_accel_sequence": false, 00:32:22.647 "rdma_max_cq_size": 0, 00:32:22.647 "rdma_cm_event_timeout_ms": 0, 00:32:22.647 "dhchap_digests": [ 00:32:22.647 "sha256", 00:32:22.647 "sha384", 00:32:22.647 "sha512" 00:32:22.647 ], 00:32:22.647 "dhchap_dhgroups": [ 00:32:22.647 "null", 00:32:22.647 "ffdhe2048", 00:32:22.647 "ffdhe3072", 00:32:22.647 "ffdhe4096", 00:32:22.647 "ffdhe6144", 00:32:22.647 "ffdhe8192" 00:32:22.647 ] 00:32:22.647 } 00:32:22.647 }, 00:32:22.647 { 00:32:22.647 "method": "bdev_nvme_attach_controller", 00:32:22.647 "params": { 00:32:22.647 "name": "nvme0", 00:32:22.647 "trtype": "TCP", 00:32:22.647 "adrfam": "IPv4", 
00:32:22.647 "traddr": "127.0.0.1", 00:32:22.647 "trsvcid": "4420", 00:32:22.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.648 "prchk_reftag": false, 00:32:22.648 "prchk_guard": false, 00:32:22.648 "ctrlr_loss_timeout_sec": 0, 00:32:22.648 "reconnect_delay_sec": 0, 00:32:22.648 "fast_io_fail_timeout_sec": 0, 00:32:22.648 "psk": "key0", 00:32:22.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.648 "hdgst": false, 00:32:22.648 "ddgst": false, 00:32:22.648 "multipath": "multipath" 00:32:22.648 } 00:32:22.648 }, 00:32:22.648 { 00:32:22.648 "method": "bdev_nvme_set_hotplug", 00:32:22.648 "params": { 00:32:22.648 "period_us": 100000, 00:32:22.648 "enable": false 00:32:22.648 } 00:32:22.648 }, 00:32:22.648 { 00:32:22.648 "method": "bdev_wait_for_examine" 00:32:22.648 } 00:32:22.648 ] 00:32:22.648 }, 00:32:22.648 { 00:32:22.648 "subsystem": "nbd", 00:32:22.648 "config": [] 00:32:22.648 } 00:32:22.648 ] 00:32:22.648 }' 00:32:22.906 [2024-11-20 07:29:46.860903] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:22.906 [2024-11-20 07:29:46.860955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84225 ] 00:32:22.906 [2024-11-20 07:29:46.990972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.906 [2024-11-20 07:29:47.021255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.164 [2024-11-20 07:29:47.129568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:23.164 [2024-11-20 07:29:47.170680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:23.732 07:29:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.732 07:29:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:23.732 07:29:47 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:23.732 07:29:47 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:23.732 07:29:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.991 07:29:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:23.991 07:29:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:23.991 07:29:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.991 07:29:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.991 07:29:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.991 07:29:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.991 07:29:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.991 07:29:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:23.991 07:29:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:23.991 07:29:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.991 07:29:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:23.991 07:29:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.991 07:29:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.991 07:29:48 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.249 07:29:48 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:24.249 07:29:48 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:24.249 07:29:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:24.249 07:29:48 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:24.507 07:29:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:24.507 07:29:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:24.507 07:29:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fkTM8kbUML /tmp/tmp.UclbD1pVUj 00:32:24.507 07:29:48 keyring_file -- keyring/file.sh@20 -- # killprocess 84225 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84225 ']' 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84225 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84225 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:24.507 killing process with pid 84225 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84225' 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@973 -- # kill 84225 00:32:24.507 Received shutdown signal, test time was about 1.000000 seconds 00:32:24.507 00:32:24.507 Latency(us) 00:32:24.507 [2024-11-20T07:29:48.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.507 [2024-11-20T07:29:48.710Z] =================================================================================================================== 00:32:24.507 [2024-11-20T07:29:48.710Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@978 -- # wait 84225 00:32:24.507 07:29:48 keyring_file -- keyring/file.sh@21 -- # killprocess 83975 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 83975 ']' 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 83975 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.507 07:29:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83975 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83975' 00:32:24.766 killing process with pid 83975 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@973 -- # kill 83975 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@978 -- # wait 83975 00:32:24.766 00:32:24.766 real 0m12.855s 00:32:24.766 user 0m31.797s 00:32:24.766 sys 0m2.061s 00:32:24.766 07:29:48 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.766 07:29:48 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 00:32:24.766 ************************************ 00:32:24.766 END TEST keyring_file 00:32:24.766 ************************************ 00:32:24.766 07:29:48 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:24.766 07:29:48 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:32:24.766 07:29:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:24.766 07:29:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.766 07:29:48 -- common/autotest_common.sh@10 -- # set +x 00:32:24.766 ************************************ 00:32:24.766 START TEST keyring_linux 00:32:24.766 ************************************ 00:32:24.766 07:29:48 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:32:24.766 Joined session keyring: 218372806 00:32:25.025 * Looking for test storage... 00:32:25.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.025 --rc genhtml_branch_coverage=1 00:32:25.025 --rc genhtml_function_coverage=1 00:32:25.025 --rc genhtml_legend=1 00:32:25.025 --rc geninfo_all_blocks=1 00:32:25.025 --rc geninfo_unexecuted_blocks=1 00:32:25.025 00:32:25.025 ' 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.025 --rc genhtml_branch_coverage=1 00:32:25.025 --rc genhtml_function_coverage=1 00:32:25.025 --rc genhtml_legend=1 00:32:25.025 --rc geninfo_all_blocks=1 00:32:25.025 --rc geninfo_unexecuted_blocks=1 00:32:25.025 00:32:25.025 ' 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.025 --rc genhtml_branch_coverage=1 00:32:25.025 --rc genhtml_function_coverage=1 00:32:25.025 --rc genhtml_legend=1 00:32:25.025 --rc geninfo_all_blocks=1 00:32:25.025 --rc geninfo_unexecuted_blocks=1 00:32:25.025 00:32:25.025 ' 00:32:25.025 07:29:49 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.025 --rc genhtml_branch_coverage=1 00:32:25.025 --rc genhtml_function_coverage=1 00:32:25.025 --rc genhtml_legend=1 00:32:25.025 --rc geninfo_all_blocks=1 00:32:25.025 --rc geninfo_unexecuted_blocks=1 00:32:25.025 00:32:25.025 ' 00:32:25.025 07:29:49 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:32:25.025 07:29:49 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.025 07:29:49 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6878406f-1821-4d15-bee4-f9cf994eb227 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=6878406f-1821-4d15-bee4-f9cf994eb227 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.025 07:29:49 keyring_linux -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.025 07:29:49 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.026 07:29:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.026 07:29:49 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.026 07:29:49 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.026 07:29:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:25.026 07:29:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:32:25.026 07:29:49 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:25.026 07:29:49 keyring_linux -- 
nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:25.026 07:29:49 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:25.026 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@507 -- # python - 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:25.026 /tmp/:spdk-test:key0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@17 -- # 
digest=0 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:32:25.026 07:29:49 keyring_linux -- nvmf/common.sh@507 -- # python - 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:25.026 /tmp/:spdk-test:key1 00:32:25.026 07:29:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84341 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.026 07:29:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84341 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84341 ']' 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.026 07:29:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.026 [2024-11-20 07:29:49.222864] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
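The two prep_key calls traced above turn each raw hex string into an NVMeTLSkey-1 interchange string via an inline Python snippet (the nvmf/common.sh@507 "python -" step). A hedged sketch of that derivation — the exact snippet lives in test/nvmf/common.sh; treating the hex string as ASCII bytes and appending a little-endian CRC-32 before base64-encoding is inferred from the payloads visible later in this log:

key=00112233445566778899aabbccddeeff   # same value as key0 above
digest=0                               # selects the "no hash" indicator, i.e. the 00 field
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string is used as raw ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity suffix appended to the key
payload = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{payload}:")
EOF

Run against key0, this prints the NVMeTLSkey-1:00:MDAx...JEiQ: string that the keyctl comparisons further down check against.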
00:32:25.026 [2024-11-20 07:29:49.223239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84341 ] 00:32:25.284 [2024-11-20 07:29:49.352717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.284 [2024-11-20 07:29:49.382434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.284 [2024-11-20 07:29:49.421173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.217 [2024-11-20 07:29:50.071778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.217 null0 00:32:26.217 [2024-11-20 07:29:50.103754] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:26.217 [2024-11-20 07:29:50.103862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:26.217 637622318 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:26.217 1031476374 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84359 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84359 /var/tmp/bperf.sock 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84359 ']' 00:32:26.217 07:29:50 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.217 07:29:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.217 [2024-11-20 07:29:50.169285] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:32:26.217 [2024-11-20 07:29:50.169340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84359 ] 00:32:26.217 [2024-11-20 07:29:50.308396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.217 [2024-11-20 07:29:50.344030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.150 07:29:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.150 07:29:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:27.150 07:29:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:27.150 07:29:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:27.150 07:29:51 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:27.150 07:29:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:27.409 [2024-11-20 07:29:51.450875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:27.409 07:29:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:27.409 07:29:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:27.668 [2024-11-20 07:29:51.679545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:27.668 nvme0n1 00:32:27.668 07:29:51 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:27.668 07:29:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:27.668 07:29:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:27.668 07:29:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:27.668 07:29:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.668 07:29:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:27.957 07:29:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:27.957 07:29:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:27.957 07:29:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:27.957 07:29:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:27.957 07:29:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.957 07:29:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:27.957 07:29:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@25 -- # sn=637622318 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
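Condensed, the keyring half of the test traced so far looks like the following sketch (paths, RPC names, and flags exactly as logged; the shell variables are ours for brevity):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/var/tmp/bperf.sock
# Stash both interchange PSKs in the session keyring under the names the
# bdev layer will look up.
keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s
# bdevperf was started with -z --wait-for-rpc, so enable the keyring_linux
# plugin before the framework initializes, then attach over NVMe/TCP TLS.
"$rpc" -s "$bperf" keyring_linux_set_options --enable
"$rpc" -s "$bperf" framework_start_init
"$rpc" -s "$bperf" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0
# check_keys: the key count and kernel serial reported over RPC must match
# what the session keyring itself says.
"$rpc" -s "$bperf" keyring_get_keys | jq length
sn=$("$rpc" -s "$bperf" keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[[ "$sn" == "$(keyctl search @s user :spdk-test:key0)" ]]

The serial that comes back (637622318 here) is then fed to keyctl print to confirm the stored payload is the exact interchange string, which is what the [[ ... ]] comparisons immediately below assert.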
00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@26 -- # [[ 637622318 == \6\3\7\6\2\2\3\1\8 ]] 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 637622318 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:28.242 07:29:52 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:28.242 Running I/O for 1 seconds...
00:32:29.180 23939.00 IOPS, 93.51 MiB/s
00:32:29.180 Latency(us)
00:32:29.180 [2024-11-20T07:29:53.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:29.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:29.180 nvme0n1 : 1.01 23934.88 93.50 0.00 0.00 5331.29 1827.45 6704.84
00:32:29.180 [2024-11-20T07:29:53.383Z] ===================================================================================================================
00:32:29.180 [2024-11-20T07:29:53.383Z] Total : 23934.88 93.50 0.00 0.00 5331.29 1827.45 6704.84
00:32:29.180 {
00:32:29.180   "results": [
00:32:29.180     {
00:32:29.180       "job": "nvme0n1",
00:32:29.180       "core_mask": "0x2",
00:32:29.180       "workload": "randread",
00:32:29.180       "status": "finished",
00:32:29.180       "queue_depth": 128,
00:32:29.180       "io_size": 4096,
00:32:29.180       "runtime": 1.00552,
00:32:29.180       "iops": 23934.87946535126,
00:32:29.180       "mibps": 93.49562291152836,
00:32:29.180       "io_failed": 0,
00:32:29.180       "io_timeout": 0,
00:32:29.180       "avg_latency_us": 5331.292414829115,
00:32:29.180       "min_latency_us": 1827.446153846154,
00:32:29.180       "max_latency_us": 6704.836923076923
00:32:29.180     }
00:32:29.180   ],
00:32:29.180   "core_count": 1
00:32:29.180 }
00:32:29.180 07:29:53 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:29.180 07:29:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:29.438 07:29:53 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:29.438 07:29:53 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:29.438 07:29:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:29.438 07:29:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:29.438 07:29:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.438 07:29:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:29.696 07:29:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:29.696 07:29:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:29.696 07:29:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:29.696 07:29:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.696
07:29:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.696 07:29:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.696 07:29:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.954 [2024-11-20 07:29:53.914792] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:29.954 [2024-11-20 07:29:53.915243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fda5d0 (107): Transport endpoint is not connected 00:32:29.954 [2024-11-20 07:29:53.916217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fda5d0 (9): Bad file descriptor 00:32:29.954 [2024-11-20 07:29:53.917215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:29.954 [2024-11-20 07:29:53.917268] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:29.954 [2024-11-20 07:29:53.917298] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:29.954 [2024-11-20 07:29:53.917331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
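The NOT wrapper around this second attach inverts the exit status, so failing to use the never-registered :spdk-test:key1 counts as a pass; the JSON-RPC dump that follows is the error it expects. A minimal stand-in for the helper, reusing the $rpc and $bperf shorthands from the sketch above (the real helper in common/autotest_common.sh additionally screens exit codes above 128, as the es checks in this trace show):

NOT() {
    # succeed only if the wrapped command fails
    if "$@"; then return 1; else return 0; fi
}
NOT "$rpc" -s "$bperf" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1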
00:32:29.954 request:
00:32:29.954 {
00:32:29.954   "name": "nvme0",
00:32:29.954   "trtype": "tcp",
00:32:29.954   "traddr": "127.0.0.1",
00:32:29.954   "adrfam": "ipv4",
00:32:29.954   "trsvcid": "4420",
00:32:29.954   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:29.954   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:29.954   "prchk_reftag": false,
00:32:29.954   "prchk_guard": false,
00:32:29.954   "hdgst": false,
00:32:29.954   "ddgst": false,
00:32:29.954   "psk": ":spdk-test:key1",
00:32:29.954   "allow_unrecognized_csi": false,
00:32:29.955   "method": "bdev_nvme_attach_controller",
00:32:29.955   "req_id": 1
00:32:29.955 }
00:32:29.955 Got JSON-RPC error response
00:32:29.955 response:
00:32:29.955 {
00:32:29.955   "code": -5,
00:32:29.955   "message": "Input/output error"
00:32:29.955 }
00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@33 -- # sn=637622318 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 637622318 00:32:29.955 1 links removed 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@33 -- # sn=1031476374 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1031476374 00:32:29.955 1 links removed 00:32:29.955 07:29:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84359 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84359 ']' 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84359 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84359 00:32:29.955 killing process with pid 84359 00:32:29.955 Received shutdown signal, test time was about 1.000000 seconds
00:32:29.955
00:32:29.955 Latency(us)
00:32:29.955 [2024-11-20T07:29:54.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:29.955 [2024-11-20T07:29:54.158Z] ===================================================================================================================
00:32:29.955 [2024-11-20T07:29:54.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:29.955 07:29:53 keyring_linux --
common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84359' 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 84359 00:32:29.955 07:29:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 84359 00:32:29.955 07:29:54 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84341 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84341 ']' 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84341 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84341 00:32:29.955 killing process with pid 84341 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84341' 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@973 -- # kill 84341 00:32:29.955 07:29:54 keyring_linux -- common/autotest_common.sh@978 -- # wait 84341 00:32:30.213 00:32:30.213 real 0m5.339s 00:32:30.213 user 0m10.304s 00:32:30.213 sys 0m1.171s 00:32:30.213 07:29:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.213 ************************************ 00:32:30.213 END TEST keyring_linux 00:32:30.213 ************************************ 00:32:30.213 07:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:30.213 07:29:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:30.213 07:29:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:30.213 07:29:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:30.213 07:29:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:30.213 07:29:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:30.213 07:29:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:30.213 07:29:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:30.213 07:29:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.213 07:29:54 -- common/autotest_common.sh@10 -- # set +x 00:32:30.213 07:29:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:30.213 07:29:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:30.213 07:29:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:30.213 07:29:54 -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 INFO: APP EXITING 00:32:31.587 INFO: killing all VMs 
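The cleanup trap that just ran (installed at keyring/linux.sh@45 near the start of the test) unlinks both keys by serial and stops both daemons. Its shape, condensed; the removal of the /tmp key files is our assumption, everything else mirrors the trace:

cleanup() {
    local name sn
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
        rm -f "/tmp/$name"      # assumed: drop the 0600 key files written by prep_key
    done
    kill "$bperfpid" "$tgtpid"  # bdevperf (84359) first, then spdk_tgt (84341)
}
trap cleanup EXIT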
00:32:31.587 INFO: killing vhost app 00:32:31.587 INFO: EXIT DONE 00:32:32.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:32.154 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:32.154 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:32.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:32.721 Cleaning 00:32:32.721 Removing: /var/run/dpdk/spdk0/config 00:32:32.721 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:32.721 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:32.721 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:32.721 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:32.721 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:32.721 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:32.721 Removing: /var/run/dpdk/spdk1/config 00:32:32.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:32.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:32.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:32.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:32.721 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:32.721 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:32.721 Removing: /var/run/dpdk/spdk2/config 00:32:32.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:32.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:32.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:32.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:32.721 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:32.721 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:32.721 Removing: /var/run/dpdk/spdk3/config 00:32:32.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:32.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:32.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:32.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:32.721 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:32.721 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:32.721 Removing: /var/run/dpdk/spdk4/config 00:32:32.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:32.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:32.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:32.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:32.721 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:32.721 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:32.721 Removing: /dev/shm/nvmf_trace.0 00:32:32.721 Removing: /dev/shm/spdk_tgt_trace.pid56089 00:32:32.721 Removing: /var/run/dpdk/spdk0 00:32:32.721 Removing: /var/run/dpdk/spdk1 00:32:32.721 Removing: /var/run/dpdk/spdk2 00:32:32.721 Removing: /var/run/dpdk/spdk3 00:32:32.721 Removing: /var/run/dpdk/spdk4 00:32:32.721 Removing: /var/run/dpdk/spdk_pid55947 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56089 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56290 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56376 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56398 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56502 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56520 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56654 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56843 00:32:32.721 Removing: /var/run/dpdk/spdk_pid56987 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57060 00:32:32.721 
Removing: /var/run/dpdk/spdk_pid57138 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57232 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57311 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57344 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57380 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57449 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57521 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57940 00:32:32.721 Removing: /var/run/dpdk/spdk_pid57986 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58032 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58048 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58098 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58110 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58165 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58181 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58221 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58239 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58279 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58297 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58422 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58452 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58540 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58863 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58875 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58906 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58914 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58935 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58954 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58962 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58983 00:32:32.721 Removing: /var/run/dpdk/spdk_pid58991 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59010 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59020 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59039 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59053 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59068 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59087 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59095 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59111 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59124 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59143 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59153 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59189 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59197 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59227 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59293 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59322 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59331 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59354 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59369 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59371 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59408 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59427 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59450 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59460 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59469 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59474 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59484 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59493 00:32:32.721 Removing: /var/run/dpdk/spdk_pid59497 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59507 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59535 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59562 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59571 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59594 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59604 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59611 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59646 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59658 00:32:32.981 Removing: 
/var/run/dpdk/spdk_pid59684 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59692 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59699 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59701 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59709 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59716 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59718 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59730 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59802 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59850 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59957 00:32:32.981 Removing: /var/run/dpdk/spdk_pid59990 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60024 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60044 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60056 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60075 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60107 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60122 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60195 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60211 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60249 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60306 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60346 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60369 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60463 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60506 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60538 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60765 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60857 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60880 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60909 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60943 00:32:32.981 Removing: /var/run/dpdk/spdk_pid60971 00:32:32.981 Removing: /var/run/dpdk/spdk_pid61010 00:32:32.981 Removing: /var/run/dpdk/spdk_pid61035 00:32:32.981 Removing: /var/run/dpdk/spdk_pid61435 00:32:32.981 Removing: /var/run/dpdk/spdk_pid61473 00:32:32.981 Removing: /var/run/dpdk/spdk_pid61802 00:32:32.981 Removing: /var/run/dpdk/spdk_pid62255 00:32:32.981 Removing: /var/run/dpdk/spdk_pid62509 00:32:32.981 Removing: /var/run/dpdk/spdk_pid63375 00:32:32.981 Removing: /var/run/dpdk/spdk_pid64283 00:32:32.981 Removing: /var/run/dpdk/spdk_pid64400 00:32:32.981 Removing: /var/run/dpdk/spdk_pid64462 00:32:32.981 Removing: /var/run/dpdk/spdk_pid65864 00:32:32.981 Removing: /var/run/dpdk/spdk_pid66169 00:32:32.981 Removing: /var/run/dpdk/spdk_pid69509 00:32:32.981 Removing: /var/run/dpdk/spdk_pid69853 00:32:32.981 Removing: /var/run/dpdk/spdk_pid69967 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70111 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70134 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70158 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70191 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70264 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70402 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70549 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70625 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70806 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70878 00:32:32.981 Removing: /var/run/dpdk/spdk_pid70965 00:32:32.981 Removing: /var/run/dpdk/spdk_pid71313 00:32:32.981 Removing: /var/run/dpdk/spdk_pid71728 00:32:32.981 Removing: /var/run/dpdk/spdk_pid71729 00:32:32.981 Removing: /var/run/dpdk/spdk_pid71730 00:32:32.981 Removing: /var/run/dpdk/spdk_pid71999 00:32:32.981 Removing: /var/run/dpdk/spdk_pid72271 00:32:32.981 Removing: /var/run/dpdk/spdk_pid72657 00:32:32.981 Removing: /var/run/dpdk/spdk_pid72663 00:32:32.981 Removing: /var/run/dpdk/spdk_pid72978 00:32:32.981 Removing: /var/run/dpdk/spdk_pid72992 
00:32:32.981 Removing: /var/run/dpdk/spdk_pid73011 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73042 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73047 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73404 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73452 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73773 00:32:32.981 Removing: /var/run/dpdk/spdk_pid73970 00:32:32.981 Removing: /var/run/dpdk/spdk_pid74398 00:32:32.981 Removing: /var/run/dpdk/spdk_pid74973 00:32:32.981 Removing: /var/run/dpdk/spdk_pid75791 00:32:32.981 Removing: /var/run/dpdk/spdk_pid76428 00:32:32.981 Removing: /var/run/dpdk/spdk_pid76431 00:32:32.981 Removing: /var/run/dpdk/spdk_pid78684 00:32:32.981 Removing: /var/run/dpdk/spdk_pid78738 00:32:32.981 Removing: /var/run/dpdk/spdk_pid78793 00:32:32.981 Removing: /var/run/dpdk/spdk_pid78854 00:32:32.981 Removing: /var/run/dpdk/spdk_pid78964 00:32:32.981 Removing: /var/run/dpdk/spdk_pid79018 00:32:32.981 Removing: /var/run/dpdk/spdk_pid79073 00:32:32.981 Removing: /var/run/dpdk/spdk_pid79133 00:32:32.981 Removing: /var/run/dpdk/spdk_pid79493 00:32:32.981 Removing: /var/run/dpdk/spdk_pid80712 00:32:32.981 Removing: /var/run/dpdk/spdk_pid80852 00:32:32.981 Removing: /var/run/dpdk/spdk_pid81104 00:32:32.981 Removing: /var/run/dpdk/spdk_pid81698 00:32:32.981 Removing: /var/run/dpdk/spdk_pid81853 00:32:32.981 Removing: /var/run/dpdk/spdk_pid82015 00:32:32.981 Removing: /var/run/dpdk/spdk_pid82112 00:32:32.981 Removing: /var/run/dpdk/spdk_pid82293 00:32:32.981 Removing: /var/run/dpdk/spdk_pid82402 00:32:32.981 Removing: /var/run/dpdk/spdk_pid83109 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83144 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83185 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83440 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83475 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83516 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83975 00:32:32.982 Removing: /var/run/dpdk/spdk_pid83992 00:32:32.982 Removing: /var/run/dpdk/spdk_pid84225 00:32:32.982 Removing: /var/run/dpdk/spdk_pid84341 00:32:32.982 Removing: /var/run/dpdk/spdk_pid84359 00:32:33.240 Clean 00:32:33.240 07:29:57 -- common/autotest_common.sh@1453 -- # return 0 00:32:33.240 07:29:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:33.240 07:29:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.240 07:29:57 -- common/autotest_common.sh@10 -- # set +x 00:32:33.240 07:29:57 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:32:33.240 07:29:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.240 07:29:57 -- common/autotest_common.sh@10 -- # set +x 00:32:33.240 07:29:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:33.240 07:29:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:33.240 07:29:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:33.240 07:29:57 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:33.240 07:29:57 -- spdk/autotest.sh@398 -- # hostname 00:32:33.240 07:29:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:33.498 geninfo: WARNING: invalid characters removed from testname! 
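The coverage post-processing that starts here is a standard lcov pipeline: capture a test-time tracefile, merge it with the baseline captured before the run, then prune vendored and system code. Stripped of the --rc branch/function-coverage options shown in the logged commands, it reduces to:

cd /home/vagrant/spdk_repo/spdk
lcov -q -c --no-external -d . -t "$(hostname)" -o ../output/cov_test.info        # capture
lcov -q -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info
lcov -q -r ../output/cov_total.info '*/dpdk/*' -o ../output/cov_total.info       # drop vendored DPDK
lcov -q -r ../output/cov_total.info '/usr/*' -o ../output/cov_total.info         # drop system headers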
00:33:00.034 07:30:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.034 07:30:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:01.408 07:30:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.937 07:30:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:05.834 07:30:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.734 07:30:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:10.265 07:30:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:10.266 07:30:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:10.266 07:30:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:33:10.266 07:30:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:10.266 07:30:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:10.266 07:30:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:10.266 + [[ -n 4987 ]] 00:33:10.266 + sudo kill 4987 00:33:10.274 [Pipeline] } 00:33:10.291 [Pipeline] // timeout 00:33:10.296 [Pipeline] } 00:33:10.311 [Pipeline] // stage 00:33:10.316 [Pipeline] } 00:33:10.331 [Pipeline] // catchError 00:33:10.341 [Pipeline] stage 00:33:10.343 [Pipeline] { (Stop VM) 00:33:10.356 [Pipeline] sh 00:33:10.634 + vagrant halt 00:33:13.162 ==> default: Halting domain... 
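The VM teardown below is plain Vagrant run from the job workspace: a graceful halt first, then an unprompted destroy that removes the domain and its disks. The by-hand equivalent is just:

vagrant halt         # ==> default: Halting domain...
vagrant destroy -f   # ==> default: Removing domain...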
00:33:16.454 [Pipeline] sh 00:33:16.731 + vagrant destroy -f 00:33:19.259 ==> default: Removing domain... 00:33:19.270 [Pipeline] sh 00:33:19.548 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:33:19.562 [Pipeline] } 00:33:19.579 [Pipeline] // stage 00:33:19.585 [Pipeline] } 00:33:19.600 [Pipeline] // dir 00:33:19.605 [Pipeline] } 00:33:19.617 [Pipeline] // wrap 00:33:19.625 [Pipeline] } 00:33:19.637 [Pipeline] // catchError 00:33:19.647 [Pipeline] stage 00:33:19.648 [Pipeline] { (Epilogue) 00:33:19.662 [Pipeline] sh 00:33:19.944 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:25.215 [Pipeline] catchError 00:33:25.216 [Pipeline] { 00:33:25.224 [Pipeline] sh 00:33:25.496 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:25.496 Artifacts sizes are good 00:33:25.504 [Pipeline] } 00:33:25.518 [Pipeline] // catchError 00:33:25.529 [Pipeline] archiveArtifacts 00:33:25.536 Archiving artifacts 00:33:25.644 [Pipeline] cleanWs 00:33:25.655 [WS-CLEANUP] Deleting project workspace... 00:33:25.655 [WS-CLEANUP] Deferred wipeout is used... 00:33:25.660 [WS-CLEANUP] done 00:33:25.662 [Pipeline] } 00:33:25.678 [Pipeline] // stage 00:33:25.684 [Pipeline] } 00:33:25.699 [Pipeline] // node 00:33:25.706 [Pipeline] End of Pipeline 00:33:25.747 Finished: SUCCESS